Data – Trajectory Magazine
The official publication of the United States Geospatial Intelligence Foundation (USGIF) – the nonprofit, educational organization supporting the geospatial intelligence tradecraft.

Future GEOINT: Data Science Will Not Be Enough
The community must go beyond statistically analyzing data to truly gain understanding

By the year 2020, many experts predict the global universe of accessible data to be on the order of 44 zettabytes—44 trillion gigabytes—with no signs of the exponential growth slowing. As a result, data science has quickly been thrust to the forefront of the international job market, and people cannot seem to get enough of it. Salaries for data scientists have increased significantly in the past few years, as the demand for a workforce fluent in scripting, machine learning (ML), and data analytics inundates jobsites spanning the globe. Websites such as glassdoor.com list the top three jobs of 2016 as data scientist, DevOps engineer, and data engineer. As a result, the market is responding with a slew of data science degree programs, with indicators that more students are opting for bachelor’s and master’s degrees over Ph.D.s, likely in an attempt to enter the competitive job market.

Supporting this observation, salaries for data scientists have started to level off as an influx of new data scientists enters the job market. Still, the demand for data scientists remains high, and the race to score employment in one of the century’s “sexiest jobs” is arguably at its peak, with little sign of slowing down.

The United States Intelligence Community (IC) is just now starting to demand these skills in earnest as it strives to maintain its leading edge to support national policy-makers and military forces, as well as to protect the nation’s borders and interests abroad. While the delayed diffusion of private sector practice to the public sector is not new, the speed of technological growth has exacerbated the time lag and placed the IC behind the power curve. The rapid growth in sensor diversity and volume in the unmanned aerial systems (UAS) market alone, compounded by the resulting flood of derived products, structured observations, and increasing volumes of publicly available information, is simply overwhelming analysts.

Data scientists’ ability to navigate petabytes of raw and unstructured data, then clean, analyze, and visualize it, has routinely proven their value to the decision cycles of their often non-technical leaders. It is no wonder, then, that the demand signal to meet the IC’s big data problem has created a buzz around data science, with many senior executives wanting more of “it.”

However, there is little strategic assessment of what actual skills will be needed in the future or how these emerging technologies and data science tools should reshape the IC’s organizational dynamics. But this is not solely the fault of senior executives. We feel there is a definition problem with data science; it is too general, too broad, and continually expanding. We also believe that while data science undoubtedly has a future in the National Geospatial-Intelligence Agency’s (NGA) vision to “Know the Earth, Show the Way, Understand the World,” the community must go beyond statistically analyzing data collected on the world around us to truly gain an understanding of the people who inhabit the world.

Future policy-makers and military leaders will be faced with a complex environment that is increasingly urban and unstable, where the observed complexity of people’s behavior over time is actually a reflection of the complexity of the system in which they are immersed. The information technology revolution we are witnessing with data science is allowing policy-makers and military leaders to see the complexity of the world they are trying to influence, but to grapple with this complexity will require new mental and organizational paradigms. Geospatial intelligence (GEOINT) is critical to the characterization and understanding of this complex world, providing the context and visualization necessary to support the decision-making process at all echelons. Perhaps for this reason, the most demanding area for advancement in computational tradecraft should be in the realm of GEOINT. The overwhelming volume, size, diversity, complexity, and speed at which geospatial data is generated requires significant improvements to the processes fielded by today’s GEOINT practitioners.

Future GEOINT practitioners will also need to apply these data to support requirements for near real-time human interpretation and synthesis into intelligence in order to describe and visualize the operating environment and provide objective predictions of physical and human actions. The National System for Geospatial Intelligence (NSG) must transition away from a discipline doctrinally constrained by multiple single-source stovepipes and embrace a multidisciplinary, dynamic, and computational analytic approach dedicated to addressing complex geographic and social issues.

During USGIF’s GEOINT 2017 Symposium, NGA signaled its intent to shift its workforce planning heavily toward data science, even suggesting it will no longer hire analysts without computer programming skills. Even the director of NGA is taking a Python course. Naturally, NSG members are following NGA’s lead, initiating pilots to build out data science capabilities within their current structures.

In all the demand for data scientists, something is lost—the fact that data science will not be enough for the future of GEOINT.

  • This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations. 

Data Science Undefined

Today’s NSG leaders are united in their recognition of the need to respond to the increasingly massive amounts of generated data—growing in veracity and volume—and want employees capable of searching, wrangling, and analyzing that data. These leaders seem to agree that data science is the profession appropriate to perform these duties, despite the fact that only a de facto definition for data science exists. A recent U.S. Air Force (USAF) white paper on Intelligence, Surveillance, and Reconnaissance (ISR) offers a definition in which data scientists “[extract] knowledge from datasets … find, interpret, and merge rich data sources; ensure consistency of datasets; create visualizations to aid in understanding data; build mathematical models using the data; and present and communicate data insights/findings to a variety of audiences.”

Data science is not a new phenomenon. In fact, as early as 1977, John Tukey—the scientist who coined the term “bit”—was developing statistical methods for digital data. In a 2013 blog “rant,” one author posits data science is simply the “application of statistics to specific activities.” Following that, “we name sciences according to what is being studied. … If what is being studied is business activity … then it is not ‘data science,’ it is business science.”

This is an extremely important counterpoint to the USAF ISR white paper that concludes, “adding ‘data science’ to an intelligence analyst’s job description would both diminish the focus on his or her core competency (intelligence analysis) and also result in sub-optimal data science.”

We feel the danger in the current NSG narrative is the expected degree of data science integration and focus. Integrating new technologies into one’s career field is critical, and intelligence analysis should not be any different. It is not that data science is a powerful breakthrough in and of itself, but rather it is the application of computational analytic tools to enhance domain knowledge that demonstrates exponential gains.

Exacerbating the definition problem is the tendency of leaders to enthusiastically fold developing technologies into data science, most likely based on the assertion that these technologies rely on big data and computers and, thus, are data science. Artificial intelligence—traditionally captured under the umbrella of “computer science”—is suddenly being lumped under “data science” as well, possibly because it requires massive training data.

This brings to light the emerging problem of senior leaders throughout the NSG and industry blurring the lines of an already loose definition and searching for the rumored “unicorn”: a geospatial or imagery analyst who can map-reduce multiple near real-time data feeds from the cloud, develop an ML neural network and deploy it to the cloud, and improve computer vision to automatically extract targets—ultimately providing policy-makers with advanced visualizations of predictive assessments on socio-political activities. This recasts data science as a cure-all black box instead of an integral tool that should be present in each career field.

While this may seem like semantics, it is important for the community to realize the implications of the narrative that data science can provide all the answers without knowledge of the geospatial and social sciences. The NSG must establish a common understanding of what data science can, and, perhaps more importantly, cannot do in order to develop concrete strategies to move forward.

Data science is focused on how to access, store, mine, structure, analyze, and visualize data. This requires deep expertise in computational statistics and plays an important part in getting pertinent data from the information technology and computer science sphere into the hands of the geospatial and social scientists focusing on their subject matter expertise. However, this deep expertise comes at a price, as few data scientists will be experts in critical geospatial principles and will likely focus more on their ability to write processing scripts. For instance, while the application of pure data science has discovered new species through statistical signatures, it offers no information on “what they look like, how they live, or much of anything else about their morphology.” Data scientists are also not software developers, which means they are unlikely to implement the algorithms and develop the tools to auto-extract objects from imagery or deploy progressive neural networks. Moreover, as tools implement more successful ML algorithms, analysts will likely be expected to elevate to higher-level tasks. For these reasons, it is highly unlikely data science is the answer to the problems geospatial and social scientists are trying to solve; rather, it serves as a tool to be leveraged when and where appropriate.

The reality of the deluge of spatial-temporally enabled data is that it is both a data science problem and a geospatial domain problem. A modern weapon system offers an analogy. Soldiers spend countless hours developing the expertise to effectively employ the weapon system, while still performing some basic maintenance and operations. In direct support of this system, however, mechanics and system specialists who are part of the team complete most of the major maintenance and modernization. Consequently, when the weapon system is operating subpar, discussions between the crews and maintainers are critical. Similarly, the ability to script and an understanding of statistical algorithms will improve GEOINT analysts’ effectiveness in operating their tools to address more complex issues, but their primary concentration should remain on fundamental intelligence tradecraft and domain knowledge. It follows that as the GEOINT analyst focuses on the challenges of synthesizing data on the complex geospatial and social environment into intelligence, computer and data scientists should focus less on basic data formatting and process simplification for GEOINT analysts, concentrating more on the challenges of researching, developing, processing, and visualizing new data streams and tools that are critical to maintaining the IC’s competitive edge.

The benefits of this symbiotic, multidisciplinary approach go beyond data science. GEOINT analysts who are versed in the foundations of computer and data science, and able to communicate with data and computer scientists, will be able to overcome the hurdle of data wrangling and advance toward geospatial computational social science. This position is in line with an earlier RAND Corporation paper published for the Defense Intelligence Agency, in which data science is termed “a team sport.”

The Future GEOINT Analyst

Geospatial computational social science (CSS) is an emerging area of diverse study that explores geographic and social science through the application of computing power, which includes data science. With origins closely tied to those of advanced computing and GIS, the geospatial CSS field is in relative infancy compared to the traditional schools of sociology, political science, anthropology, economics, and geography. It is important to note CSS does not replace these traditional social sciences, but rather advances them through applications of computational methods. By leveraging high-performance computing, advanced geostatistical analytics, and agent-based modeling, geospatial CSS empowers a multidisciplinary approach to the development of methodologies and algorithms to gather, analyze, and explore complex geospatial and social phenomena.

Geospatial CSS presents a nexus of geographical information science, social network analysis, and agent-based modeling. It will require a solid foundation in geographic principles and the ability to apply computational thinking to complex social problems. Already, programs are being written that simulate a 1:1 ratio of humans to computer agents. Imagine a catastrophic scenario in a megacity such as New York, and being able to simulate what every human in the city might do in reaction to the event, layered over high-precision terrain, physical models of buildings, super- and subterranean features, dynamic traffic patterns, and reactive infrastructure.
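
To make the idea more concrete, here is a deliberately minimal agent-based modeling sketch in Python: a population of agents on a bare grid steps away from a point event. The grid size, agent count, and movement rule are invented for illustration only and stand in for the terrain, infrastructure, and behavioral models a real geospatial CSS simulation would layer in.

```python
import random
from dataclasses import dataclass


@dataclass
class Agent:
    """One simulated person with a grid position and a simple reactive rule."""
    x: int
    y: int

    def step(self, event_x: int, event_y: int, grid_size: int) -> None:
        # Move one cell directly away from the event, clamped to the grid edges.
        dx = 1 if self.x >= event_x else -1
        dy = 1 if self.y >= event_y else -1
        self.x = min(max(self.x + dx, 0), grid_size - 1)
        self.y = min(max(self.y + dy, 0), grid_size - 1)


def run_simulation(num_agents: int = 1000, grid_size: int = 100, steps: int = 10) -> list:
    """Seed agents at random cells, drop an event at the grid center, and iterate."""
    agents = [Agent(random.randrange(grid_size), random.randrange(grid_size))
              for _ in range(num_agents)]
    event_x = event_y = grid_size // 2
    for _ in range(steps):
        for agent in agents:
            agent.step(event_x, event_y, grid_size)
    return agents


if __name__ == "__main__":
    # A real geospatial CSS model would layer these positions over terrain,
    # buildings, traffic, and infrastructure rather than a bare grid.
    print(f"Simulated {len(run_simulation())} agents")
```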

Data science methodologies will undoubtedly play a key part in future multidisciplinary teams, helping to find the proverbial “needle in the stack of needles.” Geospatial CSS, however, is not only about making statistical inferences based on zettabytes of spatial-temporal observations; it is concerned with the exploration of the theories and processes that result from interactions caused by the observables. In a complex world, the aggregation of these interactions provides more pathways to understanding the reasoning behind the behavior of our adversaries than a holistic analysis of the whole system would. To extend the needle-in-a-stack-of-needles analogy, geospatial CSS aims to provide insight into why a needle would fall a particular way, and into a particular position in space and time, within that stack. Geospatial CSS will be key in advancing GEOINT to “Understand the World.”

Recommendations

The NSG should work closely with academia to shape future geospatial computational social scientists who will be able to apply advanced computational methods, such as agent-based modeling, social network analysis, geographic information science, and deep learning algorithms, toward analyzing and understanding physical and human geographic behaviors. Whereas high-performance computing, ML, and visualization fall mostly within computer science and image science, geospatial CSS presents the nexus of geographical information science, social science, and data science. Future GEOINT analysts will require enhanced skills, applying computational power to explore and test hypotheses based on social and geographic theory to truly achieve an understanding of human interactions.

We recommend the NSG focus analytical modernization initiatives on forming multidisciplinary teams to attack key intelligence questions using geospatial CSS now, in order to refocus the narrative on the future. We understand data science techniques are still widely needed now, but feel the NSG community must come together to decompose data science for the future, focusing on key skills that rely not only on data, but on advances in information technology (IT) architectures, computation, and the application of geospatial computation to the social sciences. This will also help to delineate and define tasks to establish government workforce structure and career development, especially in the Armed Services, where traditional career series such as Intelligence Specialist (0132), Physical Scientist (1301), or Operations Research Systems Analyst (1515) strictly define work roles.

History has repeatedly shown that new technology alone does not change the conduct of war; it is how new technologies are integrated that creates advantage. This work will also guide industry initiatives and shape academia for the future geospatial scientist, rather than risk investing in a skill set that may be superseded in the future. In other words, generalizing data science as a black-box catchall will risk creating generalists. This would result in the NSG losing oversight of what truly matters: analysts utilizing pertinent spatial-temporal data to provide timely, accurate, and objective assessments that not only monitor and analyze observed activity, but also provide understanding of the geographic and human processes. This understanding requires the application of advanced computational methods to support the intelligence needs of policy-makers and warfighters.

Headline Image: Tracey Birch, Michael Gleason, and Ian Blenke, members of the SOFWERX data science team, develop software during the ThunderDrone Rodeo at their newest facility in Tampa, Fla., in October 2017. ThunderDrone is a U.S. Special Operations Command initiative dedicated to drone prototyping, which focuses on exploring drone technologies through idea formation, testing, and demonstration. Photo Credit: Master Sgt. Barry Loo

The Cross-Flow of Information for Disaster Response
Efficiently and effectively sharing data across federal communities

The flow of geospatial information and services across organizational boundaries is ultimately the fulfillment of the social contract, or the expectations between the government and its people. In a humanitarian assistance/disaster response scenario, there is an expectation that an element of the government, whether federal, state, or local, will show up and provide some kind of situational awareness. This can range from hard copy maps to an application displaying the location of critical infrastructure.

Recently, we’ve learned a lot about the operations of the Federal Emergency Management Agency (FEMA), but the lead federal agency in disaster response situations varies depending on the specific statutes governing the response. The focus needed to satisfy federal, civil, and non-governmental needs involves the evolution of policies and tools. Policy issues include (but are not limited to) guidance on the dissemination of federally controlled unclassified data and awareness of the guidance available to invoke Defense Support of Civil Authorities (DSCA). Tools include both specific databases and aggregation services for situational awareness data, such as the Pacific Disaster Center’s EMOPS tool, among many others. These are tools that enable the sharing of aggregated geospatial data—a need the public demands of its government in disaster situations.

Aggregation is arguably one of the most important functions in fulfilling this social contract. There must be a place to have threaded, focused discussions at a controlled unclassified level over a mobile device, complementing all of the other communication methods used. Understanding the data within government and how it is managed is one part of this complex puzzle. Yet U.S. society demands that these approaches be designed and smartly executed as part of the services, or contract, paid for by taxpayer dollars.

The previous administration’s Executive Order 13556, issued in 2010, established a Controlled Unclassified Information policy, but the implementation of this effort to corral the wide range of differing unclassified control systems (including “For Official Use Only” and “Law Enforcement Sensitive”) requires the replacement of “hundreds of different agency policies and associated markings.” This policy issue must be resolved as the geospatial community as a whole strives to understand the sharing paradigm in fulfilling the social contract. Commercial industry does a tremendous job supporting disaster response, especially when one of the sponsor nations invokes the International Disaster Charter, a UN-derived agreement among 16 nations to share remote sensing information during major disasters, opening the spigot for all manner of geospatial data to flow into data lakes such as the Hazards Data Distribution System to assist first responders. Yet, the cross-flow of data can always be improved through the use of tools in concert with an awareness of how different elements of government work, especially civil-military relations in a disaster context.

One way to understand how the federal-civil community works with the Department of Defense (DoD) is to review Joint Publication 3-28, Defense Support of Civil Authorities. Analyzing the multifaceted dynamics of policies, however, tells only one part of the story. The real key is a focus on shared access among first responders and other users of geospatial data during a crisis—people who leverage and use these data outside the world of theory, in the chaotic environment that is the modern world. The crucial component is having data at the point of need, which includes hard-copy maps of the type provided during the recent hurricanes by the National Geospatial-Intelligence Agency (NGA) and the U.S. Geological Survey (USGS) in concert with the Defense Logistics Agency.

  • This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations. 

Point cloud-based visualizations complement the powerful analysis provided by web-based analysis tools such as the National System for Geospatial-Intelligence Open Mapping Enclave during any kind of disaster response scenario or other geospatially referenced problem. Yet, despite these successes, the challenge remains control: not just control for security’s sake, but also program control to understand usage and system metrics in an agile programming environment, as well as managing shareholder/user expectations and ensuring profit while providing a cyber-resilient community service in a fast-paced geospatial intelligence (GEOINT) economy that demands success. Approaches to solve these problems must be tethered to a set of users ranging from novice to advanced, requiring uniform cataloguing of data and easy dissemination. This includes GIS-ready data lakes for the advanced user to manipulate, complementing simple visualizations that provide key decision-makers with at-a-glance updates. Implementing rules that incorporate all data, categorized by confidence of quality (spatial and thematic) in a common language, will give decision-makers access to all available data and allow them to take action accordingly.

At a minimum, a properly designed tool should incorporate source (state data, authoritative, ancillary, etc.), accuracy, projection/datum, classification/category, credential/permissions, and temporal attributes. Yet the time to observe this information is also at a premium given the tyranny of the immediate need on mobile devices. Shared access, then, is more than simply providing geospatially ready data and services. It is about understanding the consumer base and the psychology of the user at multiple points of need. It also means moving beyond the comfort of email into shared spaces to communicate.
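
As a rough sketch of the minimum metadata listed above, the Python fragment below defines a record carrying those fields plus a simple permission check. The field names, example values, and confidence ratings are assumptions made for illustration, not a published schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class GeospatialRecord:
    """Minimum metadata a disaster-response data tool might carry for each layer."""
    source: str               # e.g., "state", "authoritative", "ancillary"
    accuracy_m: float         # estimated horizontal accuracy, in meters
    projection: str           # projection/datum, e.g., "EPSG:4326 (WGS 84)"
    category: str             # classification/category of the layer
    permissions: List[str]    # credentials/roles allowed to view the layer
    observed_at: datetime     # temporal attribute: when the data was collected
    confidence: str = "unrated"  # spatial/thematic quality confidence rating

    def is_viewable_by(self, role: str) -> bool:
        # A simple check a mobile client could run at the point of need.
        return role in self.permissions


# Example: a state-supplied shelter layer visible to first responders.
shelters = GeospatialRecord("state", 5.0, "EPSG:4326 (WGS 84)", "shelters",
                            ["first_responder", "emergency_manager"],
                            datetime(2017, 9, 21), confidence="high")
print(shelters.is_viewable_by("first_responder"))  # True
```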

Collaboration between the federal-civil community, the DoD community, and leaders of existing data aggregation programs/tools is imperative to the successful implementation of the suggested cross-flow of information. The Federal Aviation Administration’s (FAA) Airport GIS is an example of such a program, and much can be learned, duplicated, or avoided as a result of this effort. Airport GIS offers funding to airports that are already required to submit electronic airport layout plans (eALPs) and that submit such plans in a way that is digestible to the geospatial repository. This results in data that is already required to be captured, attributed, and delivered in a common format that can be stored in the FAA’s database as well as used by airport operations themselves for enhanced functionality and increased operational efficiency. Social business software also sets the stage for the aggregation of data and services, but this can only be accomplished if the user base complements its dependence on email with these shared channels.

Email is a useful tool, yet it drains so much productivity through wasted communication. Users benefit from migrating to “complementary” services that reduce this wasted time, such as Dropbox and Jive. Fortunately, the geospatial community has access to many tools that can help link various datasets. One example is the Structured Analytic Gateway for Expertise (SAGE), a Jive-based platform for social business communication.

A DoD Inspector General report from 2013 highlights a powerful capability to share data among various users within the academic community. The recommendation is to create varied methods of communication for a range of users as part of the National Centers of Academic Excellence program, such as through the use of the SAGE environment. This is a powerful capability that enables controlled, unclassified, and mobile access to data sponsored by a DoD component. It complements and, in some cases, replaces the need for email, especially in the arena of the federal civil community’s engagement with the Intelligence Community/DoD. The U.S. Geological Survey’s Civil Applications Committee (CAC) facilitates this effort, creating a disaster hub for complex information sets that were previously shared only via large email aliases. Field data can also be leveraged by increasing deployment of software to the various app stores and through the use of shared software code facilitated by GitHub. One example of this is the Mobile Awareness GEOINT Environment (MAGE), recently used to support FEMA operations in Puerto Rico. This app was previously only available on the GEOINT App Store inside the DoD firewall but is now openly available for consumption on Apple or Android devices.

For example, these capabilities allow volcanologists dealing with an erupting volcano to quickly get data on their mobile devices, potentially bringing to bear the shared expertise of hundreds of imagery and other geospatial-services professionals. Similarly, a disaster manager working issues associated with Puerto Rican hospitals severely damaged in the wake of Hurricane Maria can receive and share appropriate geospatial datasets from a multitude of communities. This approach does not solve access problems associated with data management protections, but it does expose data efficiently to allow for shared discussion of a problem set. Questions can be posed and answered quickly without resorting to emails that often unintentionally leave crucial individuals out of the loop. All of these initiatives complement overarching and unifying efforts across the geospatial civil and military communities, such as the GeoPlatform developed under the auspices of the Federal Geographic Data Committee.

The cross-functional competencies within USGIF’s GEOINT Essential Body of Knowledge (EBK) are “synthesis, reporting, and analysis.” The description of synthesis is to identify, locate, and obtain essential information efficiently and effectively. Social business tools such as SAGE and MAGE, under the auspices of full-spectrum geospatial support committees such as the CAC and Federal Geographic Data Committee (FGDC), allow the union of controlled, unclassified, and mobile access, and are an ideal venue to enable this aspect of the EBK. Policies will change and morph, but proper communication enabling a free-flowing exchange of ideas among a wide group of users is a definite path to the fulfillment of the social contract. Many tools have limited costs as long as the purpose of the endeavor is to support a federal agency or department’s statutory mission set, such as homeland security, disaster response, or even scientific and academic research. Continued funding is needed to provide the best possible support using taxpayer dollars.

GEOINT at Platform Scale
Transforming GEOINT organizations from pipes to platforms

Today’s networked platforms are able to achieve massive success by simply connecting producers and consumers. Uber doesn’t own cars, but runs the world’s largest transportation business. Facebook is the largest content company, but doesn’t create content. Airbnb has more rooms available to its users than any hotel company, but doesn’t even own any property.

In his book, “Platform Scale: How an Emerging Business Model Helps Startups Build Large Empires with Minimum Investment,” Sangeet Paul Choudary describes how these companies have built two-sided markets that enable them to have an outsized impact on the world. He contrasts the traditional “pipe” model of production, within which internal labor and resources are organized around controlled processes, against the “platform” model, within which action is coordinated among a vast ecosystem of players. Pipe organizations focus on delivery to the consumer, optimizing every step in the process to create a single “product,” using hierarchy and gatekeepers to ensure quality control. A platform allows for alignment of incentives of producers and consumers, vastly increasing the products created and then allowing quality control through curation and reputation management. In this model, people still play the major role in creating content and completing tasks, but the traditional roles between producer and consumer become blurred and self-reinforcing.

A Platform Approach for Geospatial Intelligence

So, where does the geospatial world fit into this “platform” framework? Geospatial intelligence, also known as GEOINT, means the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on Earth. In most countries, there is either a full government agency or at least large, dedicated groups who are the primary owners of the GEOINT process and results. Most of the results they create are still produced in a “pipe” model. The final product of most GEOINT work is a report that encapsulates all the insight into an easy-to-digest image with annotation. The whole production process is oriented toward the creation of these reports, with an impressive array of technology behind it, optimized to continually transform raw data into true insight. There is the sourcing, production, and operation of assets used to gather raw geospatial signal, and the prioritization and timely delivery of those assets. Then, there are the systems to store raw data and make it available to users, and the teams of analysts and the myriad tools they use to process raw data and extract intelligence. This whole pipe of intelligence production has evolved to provide reliable GEOINT, with a growing array of incredible inputs.

These new inputs, however, start to show the limits of the pipe model, as new sources of raw geospatial information are no longer coming just from inside the GEOINT Community, but from all over the world. The rate at which new sources pop up puts stress on the traditional model of incorporating new data sources. Establishing authoritative trust in an open input such as OpenStreetMap is difficult since anyone in the world can edit the map. And the pure volume of information from new systems like constellations of small satellites also strains the pipe production method. Combining these prolific data volumes with potential sources of intelligence, like geo-tagged photos on social media and raw telemetry information from cell phones, and then coordinating resources to continually find the best raw geospatial information and turn it into valuable GEOINT, becomes overwhelming for analysts working in traditional ways.

The key to breaking away from a traditional pipe model in favor of adopting platform thinking is to stop trying to organize resources and labor around controlled processes and instead organize ecosystem resources and labor through a centralized platform that facilitates interactions among all users. This means letting go of the binary between those who create GEOINT products and those who consume them. Every operator in the field, policy analyst, and decision-maker has the potential to add value to the GEOINT production process as they interact with GEOINT data and products—sharing, providing feedback, combining with other sources, or augmenting with their individual context and insight.

Transforming GEOINT Organizations from Pipes to Platforms

The GEOINT organizations of the world are well positioned to shift their orientation from the pipe production of polished reports to providing much larger value to the greater community of users and collaborators by becoming the platform for all GEOINT interaction. Reimagining our primary GEOINT organizations as platforms means framing them as connectors rather than producers. Geospatial information naturally has many different uses for many people, so producing finished end products has the potential side effect of narrowing that use. In a traditional pipe model, the process and its results become shaped toward the audiences consuming them and the questions they care about, limiting the realized value of costly assets.

Becoming the central node providing a platform that embraces and enhances the avalanche of information will be critical to ensuring a competitive and tactical advantage in a world where myriad GEOINT sources and reports are openly available. The platform will enable analysts to access and exploit data ahead of our competitors, and enable operators and end users to contribute unique insights instead of being passive consumers. The rest of this article explores in depth what an organization’s shift from pipe production toward a platform would actually look like.

Rethinking GEOINT Repositories

A GEOINT platform must allow all users in the community to discover, use, contribute, synthesize, amend, and share GEOINT data, products, and services. This platform should connect consumers of GEOINT data products and services to other consumers, consumers to producers, producers to other producers, and everyone to the larger ecosystem of raw data, services, and computational processes (e.g., artificial intelligence, machine learning, etc.). The platform envisioned provides the filtering and curation functionality by leveraging the interactions of all users instead of trying to first coordinate and then certify everything that goes through the pipe.

Trust is created through reputation and curation. Airbnb creates enough trust for people to let strangers into their homes because reputation is well established by linking to social media profiles and conducting additional verification of driver’s licenses to confirm identity, and then having both sides rate each interaction. Trust is also created through the continuous automated validation, verification, and overall “scrubbing” of the data, searching for inconsistencies that may have been inserted by humans or machines. Credit card companies do this on a continuous, real-time basis in order to combat the massive onslaught of fraudsters and transnational organized crime groups seeking to syphon funds. Trust is also generated by automated deep learning processes that have been broadly trained by expert users who create data and suggest answers in a transparent, auditable, and retrainable fashion. This is perhaps the least mature, though most promising, future opportunity for generating trust. In such a future GEOINT platform, all three of these kinds of trust mechanisms (e.g., reputation/curation, automated validation/verification/scrubbing, expert trained deep learning) should be harnessed together in a self-reinforcing manner.
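
One hedged way to picture how those three mechanisms could be harnessed together is a composite score like the Python sketch below; the weights, value ranges, and function name are placeholders invented for the example, not a recommended calibration.

```python
def composite_trust(reputation: float,
                    checks_passed: float,
                    model_confidence: float,
                    weights: tuple = (0.4, 0.4, 0.2)) -> float:
    """Blend the three trust mechanisms into a single 0-1 score.

    reputation       -- curation/reputation signal from platform users (0-1)
    checks_passed    -- share of automated validation/scrubbing checks passed (0-1)
    model_confidence -- confidence from an expert-trained learned model (0-1)
    """
    w_rep, w_val, w_ml = weights
    score = w_rep * reputation + w_val * checks_passed + w_ml * model_confidence
    return max(0.0, min(1.0, score))


# A well-reviewed source that passes most automated checks but has little
# model support still earns a moderately high trust score.
print(round(composite_trust(reputation=0.9, checks_passed=0.8, model_confidence=0.3), 2))
```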

Most repositories of the raw data that contributes to GEOINT products attempt to establish trust and authority before data comes into the repository, governed by individuals deeply researching each source. The platform approach embraces as much input data as possible and shifts trust and authority to a fluid process established by users and producers on the platform, creating governance through metrics of usage and reputation. These repositories are the places on which we should focus platform thinking. Instead of treating each repository as just the “source” of data, repositories should become the key coordination mechanism. People searching for data that is not in the repository should trigger a signal to gather the missing information. And the usage metrics of information stored in the repository should similarly drive actions. Users of the platform, like operators in the field, should be able to pull raw information and easily produce their own GEOINT data and insights, and then contribute those back to the same repository used by analysts. A rethinking of repositories should include how they can coordinate action to create both the raw information and refined GEOINT products that users and other producers desire.

  • This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations. 

Core Value Units

How would we design a platform built to create better GEOINT products? In “Platform Scale,” Choudary points out that one of the best ways to design a platform is to start with the “Core Value Unit,” and then figure out the key interactions that increase the production of that unit. For YouTube, videos are the core value unit; for Uber, it’s ride services; for Facebook, it’s posts and shares; and so on.

For GEOINT, we posit the core value unit is not simply a polished intelligence report, but any piece of raw imagery, processed imagery, geospatial data, information, or insight—including that finished report. For the purposes of this article, we’ll refer to this as the “Core Value Unit of GEOINT (CVU-GEOINT).” It includes any annotation that a user makes, any comment on an image or an object in an image, any object or trend identified by a human or algorithm, and any source data from inside the community or the larger outside world. It is important to represent every piece of information in the platform, even those that come from outside with questionable provenance. Trusted actors with reputations on the platform will be able to “certify” a CVU-GEOINT within the platform. Or they may decide it is not trustworthy, but still use it in its appropriate context along with other trusted sources. Many CVU-GEOINTs may be remixes or reprocessings of other CVUs, but the key is to track all actions and data on the platform so a user may follow a new CVU-GEOINT back to its primary sources.
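
One way to picture a CVU-GEOINT is as a small record that always carries links to its parents. The sketch below, with invented field names and a toy in-memory registry, shows how any derived unit could be traced back to its primary sources.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class CVU:
    """A Core Value Unit of GEOINT: raw data, an annotation, or a finished report."""
    cvu_id: str
    kind: str                                            # e.g., "raw_image", "annotation", "report"
    trusted: bool = False                                # set by trusted actors on the platform
    parent_ids: List[str] = field(default_factory=list)  # provenance links


REGISTRY: Dict[str, CVU] = {}  # stand-in for the platform's central store


def primary_sources(cvu_id: str) -> Set[str]:
    """Walk provenance links back to CVUs with no parents (the primary sources)."""
    cvu = REGISTRY[cvu_id]
    if not cvu.parent_ids:
        return {cvu.cvu_id}
    sources: Set[str] = set()
    for parent_id in cvu.parent_ids:
        sources |= primary_sources(parent_id)
    return sources


# A report derived from an annotation on a raw image traces back to that image.
REGISTRY["img-1"] = CVU("img-1", "raw_image")
REGISTRY["ann-1"] = CVU("ann-1", "annotation", parent_ids=["img-1"])
REGISTRY["rpt-1"] = CVU("rpt-1", "report", trusted=True, parent_ids=["ann-1"])
print(primary_sources("rpt-1"))  # {'img-1'}
```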

Maximizing Core Value Units of GEOINT

It is essential that as much raw data as possible be available within the platform, both trusted and untrusted. The platform must be designed to handle the tsunami of information, enabling immediate filtering after content is posted to the platform, not before. Sources should be marked as trusted or untrusted, but it should be up to users to decide if they want to pull some “untrusted” information, and then, for example, certify as trusted the resulting CVU-GEOINT because they cross-referenced four other untrusted sources and two trusted sources that didn’t have the full picture. Open data sources such as OpenStreetMap, imagery from consumer drones, cell phone photos, and more should be available on the platform. The platform would not necessarily replicate all the data, but it would reference it and enable exploitation. These open data sources should be available to the full community of users, as the more people that use the platform, the more signal the platform gets on the utility and usefulness of its information, and, subsequently, more experts can easily analyze the data and certify it as trusted or untrusted.

It should be simple to create additional information and insight on the platform, where the new annotation, comment, or traced vector on top of some raw data becomes itself a CVU-GEOINT that another user can similarly leverage. An essential ingredient to enable this is to increase the “channels” of the platform, enabling users and developers in diverse environments to easily consume information and also contribute back. This includes standards-based application programming interfaces (APIs) that applications can be built upon and simple web graphical user interface (GUI) tools that are accessible to anyone, not just experts. It would also be important to prioritize integration with the workflows and tool sets that are currently the most popular among analysts. The “contribution back” would include users actively making new processed data, quick annotations, and insights. But passive contribution is equally important—every user contributes as they use the data, since the use of data is a signal of it being useful, and including it as a source in a trusted report is also an indication of trust. The platform must work with all the security protocols in place, so signal of use in secure systems doesn’t leak out to everyone, but the security constraints do not mean the core interactions should be designed differently.

Filtering Data for Meaning

Putting all the raw information on the platform does risk overwhelming users, which is why there must be complementary investment in filters. Platforms such as YouTube, Facebook, and Instagram work because users get information filtered and prioritized in a sensible way. Users don’t have to conduct extensive searches to find relevant content—they access the platform and get a filtered view of a reasonable set of information. And then they can perform their own searches to find more information. A similar GEOINT platform needs to provide each user with the information relevant to them and be able to determine that relevance with minimal user input. It can start with the most used data in the user’s organization or team, or the most recent in areas of interest, but it should then learn based on what a user interacts with and uses. Recommendation engines that perform deep mining of usage and profile data will help enhance the experience so that all the different users of the platform—operators in the field, mission planning specialists, expert analysts, decision-makers, and more—will have different views that are relevant to them. Users should not have to know what to search for; they should simply receive recommendations based on their identity, team, and usage patterns as they find value in the platform.

The other key to great filtering is tracking the provenance of every piece of CVU-GEOINT in the platform so that any derived information or insight also contains links to the information it was derived from. Any end product should link back to every bit of source information that went into it, and any user should be able to quickly survey all data pedigrees. Provenance tracking could employ new blockchain technologies, but decentralized tracking is likely not needed initially when all information is at least represented on a centralized platform.

Building readily available source information into the platform will enable more granular degrees of trust; the most trusted GEOINT should come from the certified data sources, with multiple trusted individuals blessing it in their usage. And having the lineage visible will also make usage metrics much more meaningful—only a handful of analysts may access raw data, but if their work is widely used, then the source asset should rise to the top of most filters because the information extracted from it is of great value. If this mechanism is designed properly, the exquisite data would naturally rise to the surface, above the vast sea of data that still remains accessible to anyone on the platform.
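
The lineage-driven ranking described above could be approximated by crediting every use of a derived product back to its source assets, as in the toy sketch below; the item names, counts, and single-level provenance map are invented for illustration.

```python
from collections import defaultdict
from typing import Dict, List

# Invented example data: which source assets each product was derived from,
# and how often each item was accessed directly.
PARENTS: Dict[str, List[str]] = {
    "report-A": ["raw-sat-1"],
    "report-B": ["raw-sat-1", "raw-drone-2"],
}
DIRECT_USES: Dict[str, int] = {
    "report-A": 120, "report-B": 45, "raw-sat-1": 3, "raw-drone-2": 8,
}


def lineage_weighted_usage() -> Dict[str, int]:
    """Credit every use of a derived product back to the assets it was built from."""
    scores: Dict[str, int] = defaultdict(int, DIRECT_USES)
    for product, sources in PARENTS.items():
        for source in sources:
            scores[source] += DIRECT_USES.get(product, 0)
    return dict(scores)


# raw-sat-1 is rarely opened directly, but because two widely used reports
# depend on it, it now ranks near the top of a usage-driven filter.
print(sorted(lineage_weighted_usage().items(), key=lambda kv: -kv[1]))
```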

It is important to note that such a platform strategy would also pay dividends when it is the divergent minority opinion or insight that holds the truth, or happens to anticipate important events. The same trust mechanisms that rigorously account for lineage will help the heretical analyst make his or her case when competing for the attention of analytical, operational, and policy-making leadership.

The Role of Analysts in a Platform World

To bootstrap the filtration system, the most important thing is to leverage the expert analysts who are already part of the system. This platform would not be a replacement for analysts; on the contrary, the platform only works if the analysts are expert users and the key producers of CVU-GEOINT. Any attempt to transform from the pipe model of production to a platform must start with analysts as the first focus, enabling their workflows to exist fully within a platform. Once their output seamlessly becomes part of the platform, then any user could easily “subscribe” to an analyst or a team of analysts focused on an area. The existing consumers of polished GEOINT products would no longer need to receive a finished report in their inbox that is geared exactly to their intelligence problem. Instead, they will be able to subscribe to filtered, trusted, polished CVU-GEOINT as it is produced, configuring notifications to alert them of new content and interacting with the system to prioritize the gathering and refinement of additional geospatial intelligence.

The consumption of GEOINT data, products, and services should be self-service, because all produced intelligence, along with the source information that went into it, can be found on the platform. Operators would not need to wait for the finished report; they could just pull the raw information from the platform and filter for available analyst GEOINT reports. Thus analysts shift to the position of the “curators” of information instead of having exclusive access to key information. But this would not diminish their role—analysts would still be the ones to endow data with trust. Trust would be a fluid property of the system, but could only be given by those with the expert analyst background. This shift should help analysts and operators be better equipped to handle the growing tsunami of data by letting each focus on the area they are expert in and allowing them to leverage a network of trusted analysts.

The other substantial benefit of a platform approach is the ability to integrate new data products and services using machine learning and artificial intelligence-based models. These new models and algorithms promise to better handle the vast amounts of data being generated today, but they also risk overwhelming the community with too much information. In the platform model, the algorithms would both consume and output CVU-GEOINT, tracking provenance and trust in the same environment as the analysts. Tracking all algorithmic output as CVU-GEOINT would enable analysts to properly filter the algorithms for high-quality inputs. And the analyst-produced CVU-GEOINT would in turn be input for other automated deep learning models. But deep learning results are only as good as their input, so the trusted production and curation of expert analysts becomes even more important in the platform-enabled, artificial intelligence-enhanced world that is fast approaching. The resulting analytics would never replace an analyst, as they wouldn’t have full context or decision-making abilities, but their output could help analysts prioritize and point their attention in the right direction.

Recommendations for GEOINT Organizations

Reimagining GEOINT organizations as platforms means thinking of their roles as “trusted matchmakers” rather than producers. This does not mean such agencies should abdicate their responsibilities as a procurer of source data. But, as a platform, they should connect those with data and intelligence needs with those who produce data. And this matchmaking should be data-driven, with automated filters created from usage and needs. Indeed the matchmaking should extend all the way to prioritizing collections, but in a fully automated way driven by the data needs extracted from the system.

A GEOINT organization looking to embrace platform thinking should bring as much raw data as possible into the system, and then measure usage to prioritize future acquisitions. It should enable the connection of its users with the sources of information, facilitating that connection even when the utility to the users inside the agency is not clear.

  • Be the platform for GEOINT, not the largest producer of GEOINT, and enable the interaction of diverse producers and consumers inside the agency with the larger intelligence and defense communities and with the world.
  • Supply raw data to everyone. Finished products should let anyone get to the source.
  • Govern by automated metrics and reputation management, bring all data into the platform, and enable governance as a property of the system rather than acting as the gatekeeper.
  • Create curation and reputation systems that put analysts and operators at the center, generating the most valuable GEOINT delivered on a platform where all can create content. Enable filters to get the best information from top analysts and data sources by remaking the role of the expert analyst as curator for the ecosystem rather than producer for an information factory.

The vast amounts of openly available geospatial data sources and the acceleration of the wider availability of advanced analytics threaten to overwhelm traditional GEOINT organizations that have fully optimized their “pipe” model of production. Indeed there is real risk of top agencies losing the traditional competitive advantage when so much new data can be mined with deep learning by anybody in the world. Only by embracing platform thinking will organizations be able to remain relevant and stay ahead of adversaries, and not end up like the taxi industry in the age of Uber. There is a huge opportunity to better serve the wider national security community by connecting the world of producers and consumers instead of focusing on polished reports for a small audience. The GEOINT organization as a platform would flexibly serve far more users at a wider variety of organizations, making geospatial insight a part of everyday life for everyone.

Everything, Everywhere, All the Time—Now What?
What will result from the persistence and depth of geospatial data?

A near-clairvoyant ability to develop knowledge on everything, everywhere, all the time is fictitiously portrayed in TV shows such as 24, Person of Interest, The Wire, Alias, and Homeland. However, the recent proliferation of new sensors, the integration of humans and machines, and the advent of big data analytics provide new opportunities to translate this portrayed drama and excitement of intelligence fiction into intelligence fact. The persistence and depth of data now readily available allow new products to be woven together out of three basic threads, or classes, of information: vector-based knowledge (everything), locational knowledge (everywhere), and temporal knowledge (all the time).

As we move to an era of ubiquitous, real-time information, economists, first responders, business intelligence analysts, scientific researchers, intelligence officers, and many other analysts have the potential to answer questions previously unimagined. However, reaching this potential future vision will require the geospatial intelligence (GEOINT) Community to overcome several distinct challenges.

New GEOINT Sources for a New World

Where analysts previously relied on only a few sources, today’s GEOINT professionals have a plethora of new, non-traditional sources from which to choose. Increasingly proliferated and persistent small satellites, drones, and other emerging commercial capabilities contribute greatly to the wealth of information by complementing traditional airborne and spaceborne GEOINT collection systems. At the same time, the convergence of sensing, communication, and computation on single platforms, combined with the ubiquity of the internet and mobile devices, has further increased the variety of data available.

Traditional and proven imagery capabilities based on large government and commercial aircraft and spacecraft have been augmented by increasingly capable small satellites that cost less to produce and are easier to launch. Small sats, picosats, and even smaller versions created in the past decade have proliferated new remote sensing capabilities that increase revisit rates and cover larger portions of the electromagnetic spectrum. Closer to Earth, affordable commercial drones with high-resolution imaging, multi/hyper-spectral sensors, high-definition video, and other capabilities have revolutionized all aspects of data collection, from hobby photography to agriculture to archaeology. Small sats also contribute to the U.S. military mission by providing easier and faster access to communication, positioning, navigation, timing, and weather data. As these sensors become more affordable, pervasive, and persistent, new users across industry, academia, and government will be able to leverage increasingly capable systems to improve access to all forms of GEOINT.

Crowdsourcing, or “participatory sensing,” is defined in the 2011 Merriam-Webster’s Dictionary as “the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people and especially from the online community rather than from traditional employees or suppliers.” Crowdsourcing plays a major role in creating information-rich maps, collecting geo-localized human activity, and working collaboratively. This relative newcomer to the GEOINT tool kit has been utilized effectively in crisis mapping efforts such as DigitalGlobe’s Tomnod, a volunteer, public crowdsourcing community that gained popularity during the 2014 search for Malaysia Airlines Flight 370 and the aftermath of the 2015 Nepal earthquake. In the commercial sector, companies like Findyr, Native, and Spatial Networks provide high-fidelity, street-level, near real-time contextual data from a worldwide, hyper-local audience of participatory geographers.

While intelligence revolutions often rely on the advent of new collection systems, a dominant driver for the future of GEOINT includes multisource data persistently generated and processed by intelligent machines. National Geospatial-Intelligence Agency (NGA) Director Robert Cardillo recently named artificial intelligence (AI) and machine learning (ML) technologies a top priority for U.S. GEOINT Community analysis: “If we attempted to manually exploit all of the imagery we’ll collect over the next 20 years, we’d need 8 million imagery analysts.” Cardillo also noted automation is needed to augment human analysts. The development of ML algorithms for automated change detection to handle the increasing load of imagery data will free up human analysts to continue working on “higher-order thinking to answer broader questions,” said Scot Currie, director of NGA’s Source Mission Integration Office. Such algorithms also have the potential to discover unknown relationships invisible to cognitively biased humans, generate unconventional hypotheses, and anticipate potential outcomes based on continuously learned causal models.

Included in these techniques are automated feature-based image registration, automated change finding, automated change feature extraction and identification, intelligent change recognition, change accuracy assessment, and database updating and visualization. The GEOINT analyst of the near future will operate as a member of a blended human-machine team that leverages the best skills of each to answer more questions with more information over a wider range of issues on a shorter timeline.
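At its simplest, automated change detection reduces to comparing two co-registered images of the same scene and flagging pixels that differ more than expected. The short Python sketch below is a minimal illustration using NumPy, with synthetic data and an arbitrary threshold, not a description of any NGA system; operational algorithms add radiometric normalization, learned feature extraction, and object-level reasoning on top of this basic step.

    import numpy as np

    def detect_changes(before, after, threshold=2.0):
        """Flag pixels whose difference exceeds `threshold` standard deviations.

        `before` and `after` are co-registered, same-shape 2D arrays
        (e.g., single-band imagery resampled to a common grid).
        Returns a boolean mask of candidate change pixels.
        """
        diff = after.astype(float) - before.astype(float)
        z = (diff - diff.mean()) / (diff.std() + 1e-9)
        return np.abs(z) > threshold

    # Toy example: a 100x100 scene in which a 10x10 "structure" appears.
    rng = np.random.default_rng(0)
    before = rng.normal(100, 5, (100, 100))
    after = before + rng.normal(0, 5, (100, 100))
    after[40:50, 40:50] += 60                 # simulated new construction
    mask = detect_changes(before, after)
    print(f"{mask.sum()} pixels flagged for analyst review")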

  • This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations. 

Living and Working in a Persistent Knowledge Environment

Highlighting The Economist's description of data as "the new oil," a valuable commodity driving our economy, NGA Director of Capabilities Dr. Anthony Vinci has challenged industry partners to "turn it into plastic." The tradecraft of a GEOINT analyst now lies in the ability to quickly synthesize this highly adaptable resource into intricate, creative, and useful products previously unforeseen or unimagined.

In a persistent information world, every object and entity on, above, or below the surface of the Earth may be represented as a vector of its attributes—i.e., all the metadata about that entity. This extends the analytic paradigm to a knowledge environment in which every property of every entity is always available in real time. Analysts will be able to create comprehensive datasets about specific vectors of interest—be it individuals, groups of people, a particular building, or a particular type of infrastructure. To know “everything” in this sense means being able to perform a deep dive on any person, place, or thing and gain insight on how their attributes are distributed spatially. In addition, this new wave of data allows us to dispense with the old pick-and-choose mentality and perform this level of examination on all subjects at the same time. If this capability sounds far-fetched, the proliferation of sensor-enabled, internet-connected mobile devices—the so-called Internet of Things (IoT)—seems poised to introduce a paradigm in the not-too-distant future in which almost every entity on Earth beacons vectorized metadata into a ubiquitous data cloud.
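As a rough sketch of what such a "vector of attributes" could look like in code, the Python fragment below represents an entity as a typed record with an open-ended metadata dictionary; every field name and value here is invented for illustration rather than drawn from any standard schema.

    from dataclasses import dataclass, field
    from typing import Any, Dict

    @dataclass
    class Entity:
        """One real-world object, represented as a vector of its attributes."""
        entity_id: str
        lat: float
        lon: float
        alt: float = 0.0
        attributes: Dict[str, Any] = field(default_factory=dict)  # open-ended metadata

    # A piece of infrastructure "beaconing" vectorized metadata into a shared store.
    substation = Entity(
        entity_id="infra-00042",
        lat=38.889, lon=-77.035, alt=12.0,
        attributes={"type": "electrical substation", "owner": "utility-A",
                    "last_observed": "2018-01-15T09:30:00Z"},
    )

    # A "deep dive" on a vector of interest is then a filter over the attribute space.
    catalog = [substation]
    hits = [e for e in catalog if e.attributes.get("type") == "electrical substation"]
    print(len(hits), "matching entities")

In a true persistent knowledge environment, the catalog above would be a continuously updated, planet-scale index rather than an in-memory list, but the query pattern remains the same.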

In this world, it is also possible to create a complete dataset about any location on Earth. For a given place, we can gather data about topography, weather and climate, population density and demographics, local populations, recent conflicts, infrastructure, land cover, and 3D renders and imagery of buildings. Aggregating these data allows for a complete snapshot not only of any given area, but of the whole Earth at once. Imagine a spinning Google Earth globe with an infinite number of layers and an infinite level of detail at every altitude from the Mariana Trench to geostationary Earth orbit updated in real time. The challenge for the analyst holding such a globe is simply where to start.

As persistent data providers blanket the Earth and index data in accessible, online repositories, analysts will build upon these immense place- and vector-oriented datasets over long periods of time. Doing so exposes population movements and demographic shifts, changes in weather, climate, and land cover, the destruction or construction of infrastructure, and the migration of conflict hot spots over time. By integrating all of these datasets, we can connect patterns between any number of variables across time. The concept of real time extends to all time.

With persistent knowledge across vector, location, and temporal domains, analysts can instantly exploit extremely high-resolution data about every concept of interest in every location, and refresh it on a daily basis, if not more frequently. However, the question remains, “So what?” It certainly seems interesting to have persistent data, but what can we do with them that we couldn’t do with simple big data? Are there questions we can answer now that we couldn’t before?

Answering New Questions

As anyone with a five-year-old child can attest, the most dreaded word in the English language is “why.” Children approach the world with relentless inquisitiveness, but “why” questions are taxing to answer. Early GEOINT collection and analysis capabilities constrained analysts to answering questions of what, where, and how many, but modern analytic advances open new avenues for who, how, and most importantly, why. The GEOINT environment of the future will reinvigorate the curious five-year-old in all of us.

The ability to rapidly ask questions and instantly receive answers from a near-infinite amalgamation of information gives analysts a Google-like ability to comprehend the world. New analysts will develop a deep understanding of geospatial and cultural issues in a fraction of the time required with infrequent, periodic collection and multiyear analytic efforts. Using app-based micro-tasking capabilities, an intelligence analyst in Virginia might interact in a video chat session with a protester in Cairo to understand why people are protesting and anticipate future areas of violence.

Analysts in the past operated in a sparse data environment, waiting for data that in some cases was never collected or never processed in time. In an environment of instant, persistent data, many knowable facts will likely be sensed through multiple phenomenologies that do not always support the same interpretation. The pace of weighing, judging, integrating, and verifying information will increase dramatically. Decision-makers will demand an unrelenting operational tempo and hold analysts to a near-superhuman expectation of omniscience. It falls to analysts and analytics to make sense of everything in context.

Integrating cultural, social, and economic information within GEOINT analysis significantly enhances analyst understanding over object-focused analysis. Human geography, while not technically a new source, is being used in new ways to provide fresh insights. By applying the who, what, when, why, and how of a particular group to geographic locations, analysts can create maps to track the social networks of terrorist groups, documenting complex interactions and relationships. By mapping the evolution and movement of ideas, activities, technologies, and beliefs, analysts develop deep contextual maps that combine “where” and “how” to convey “why” events are occurring (e.g., the rise of the Islamic State).

By integrating information about a specific area, such as who is in charge, what language they speak, whom they worship, what they eat, etc., analysts can create information mash-ups on the web that help planners and decision-makers safely and effectively anticipate potential violence, deliver humanitarian aid, or improve regional stability. Human terrain information, introduced at broad scale during the Iraq and Afghanistan conflicts, will increasingly become part of a standard foundational GEOINT layer included in all cartographic products.

Collaborative analytic teams will extend existing operating procedures based on text-based Jabber sessions to “always-on” telepresence where geographically dispersed analysts interact as though they are in the same room and time zone. Perhaps these teams will even break down collaboration barriers across organizations, time zones, cultures, languages, and experience levels. Multidisciplinary teams working in a persistent knowledge environment can change their mind-set and answer new questions, especially the elusive “why.”

Overcoming New Challenges

While the proliferation of sensors and seemingly on-demand big data may lead us to believe omniscience is truly within reach, several distinct challenges impede this vision of the future. First, if the data exists, can everyone access it? Should they? The democratization of data has made enormous volumes available to anyone who can search and download from the internet. But, as the saying goes, you get what you pay for. Thousands of websites offer free investment advice, but can you beat the market when everyone has access to the same free data? For example, the much-touted data.gov repository boasts nearly 200,000 free public datasets, but the data are not always well described or easy to navigate.

Even when freely available data points to a logical conclusion, skepticism is warranted. We cannot always know the origin of openly available data, be sure it has not been altered, assume scientific correction factors have been applied appropriately to raw data, or confirm metadata has been tagged correctly. Additionally, we may not know the intent of the person who made the data available or the biases that may have been introduced. In short, data veracity is in question whenever one does not fully control the data supply chain. Constant verification and vetting of sources may consume much of the analytic time bought back by advanced automated algorithms.

The omnipresence of freely available data can overwhelm any analytic workflow, even with powerful big data processing, quickly becoming a self-imposed analytic quagmire. Many analysts will nonetheless brave the deluge to ferret out the insight hiding within it. Data can be conditioned and formats standardized for compatibility, but to what benefit if the combined dataset is impossible to search, filter, fuse, and explore in a realistic time frame?

As commercial market demand for remotely sensed data and knowledge products continues to evolve and expand and barriers to market entry fall, new vendors continue to emerge. A critical question arises: Can the government afford to pay for commercial data, and can commercial companies survive if it does not? In 2010, NGA awarded two 10-year contracts for commercial imagery to DigitalGlobe and GeoEye with a combined value of $7.3 billion, but two years later, funding shortfalls helped push the companies to merge.

Social media harvesting and sentiment mining are popular, but Twitter aggregator Sifter charges roughly $50 per 100,000 tweets. (Twitter estimates about 200 billion tweets are posted each year.) The Intelligence Community's noble attempt to connect all the dots to ensure the U.S. does not experience a surprise terrorist or military attack underscores the desire to acquire and examine "all" available data. Whether for government or commercial users, it may be cost-prohibitive to purchase and examine all collected data in pursuit of competitive advantage; going infinitely and indefinitely global might carry a similarly infinite price tag. Yet overly narrowing the focus of collection might limit opportunities to stumble upon the singular "missing dot."
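A back-of-the-envelope calculation with the figures cited above makes the cost argument concrete; the retail rate would almost certainly not hold at this volume, so the result should be read only as an order-of-magnitude illustration.

    # Rough annual cost of "collecting everything" at the cited retail rate.
    price_per_tweet = 50 / 100_000          # $50 per 100,000 tweets
    tweets_per_year = 200_000_000_000       # ~200 billion tweets per year
    annual_cost = price_per_tweet * tweets_per_year
    print(f"${annual_cost:,.0f} per year")  # -> $100,000,000 per year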

Additionally, licensing and usage rights that protect commercial business often inhibit redistribution of data to other individuals, departments, or agencies. The U.S. government's contract with DigitalGlobe limits imagery use to within federal agencies to "minimize the effects on commercial sales." NGA's 2017 $14 million contract to San Francisco-based imagery start-up Planet tests a subscription-based model to "access imagery over 25 select regions of interest" for one year. Despite its widespread use as a source of human activity data, the Twitter Terms of Service prevent usage of data "for surveillance purposes" and "in a manner inconsistent with our users' reasonable expectations of privacy." Key issues of perpetual data ownership, lineage to source data and processing settings, privacy protections, and long-term archive requirements will challenge traditional concepts of ownership.

Finally, the ubiquity of spatial data of all kinds raises new privacy concerns. Policies have been developed to govern how different "INTs" may be used, but when source data can be worked into new products and discoveries, protecting citizens from continuous monitoring becomes increasingly difficult. A GEOINT-savvy workforce must also include lawyers, psychologists, law enforcement personnel, and even politicians.

Succeeding in a Persistent World

The democratization of GEOINT and the expectation of omniscient, instant knowledge of every activity, event, and object on Earth puts new pressures on the GEOINT workforce. Similar pressure exists in commercial industry, such as in the financial sector, to ensure better knowledge than competitors about issues such as trade volumes, raw material supply, and transportation networks. In the business of intelligence, competitors are threats that evolve across changing geopolitical and economic environments. The stakes are more than financial—they are existential. As GEOINT becomes more persistent, pervasive, and accessible, it will also become increasingly able to answer new questions, develop new products, and enhance the GEOINT workforce with new tradecraft.

Headline Image: Data visualization software is displayed for attendees of the Cyber Capability Expo at the newest SOFWERX facility in Tampa, Fla., October 2017. The expo sought to identify novel and provocative cyber technologies to meet current and future special operations forces requirements. Photo Credit: Master Sgt. Barry Loo

Harris Corporation: Enabling the End User http://trajectorymagazine.com/harris-corporation-enabling-end-user/ Wed, 31 Jan 2018 17:39:32 +0000 http://trajectorymagazine.com/?p=35820 Q&A with Erik Arvesen, vice president and general manager, Geospatial Solutions

What products and services does Harris offer the GEOINT Community?

Erik Arvesen, vice president and general manager, Harris Geospatial Solutions

Harris works with myriad customers and mission sets ranging from defense commands and civil government to utility companies and commercial business. Each of those has very different challenges. After assessing the specific needs of a customer, we apply our diverse tools and services. We offer everything from real-time onboard data processing for airborne sensors to ground processing systems. We have advanced off-the-shelf software like ENVI, which extracts actionable information from remote sensing data through change detection. We use Geiger-mode LiDAR sensors for applications from the U.S. Geological Survey to utility companies looking at transmission lines. We also have different machine learning and video analysis capabilities.

How would you describe your company culture?

The passion our teams and individuals have for our customers’ missions is intoxicating and resonates throughout our facilities in Colorado, Los Angeles, Florida, St. Louis, Rochester, N.Y., and the D.C. area. Additionally, the company’s heritage is based on cutting-edge technology. The ENVI software team and the company’s Visual Information Systems, for example, have technology-centric mindsets. All this goes back to understanding the customer’s specific challenges and how we can use innovative technology to solve their problems.

How has Harris evolved to keep pace with the explosion of geospatial technology in the last decade?

Having competence and comfort in solving complex problems using multiple data sources is a big part of our core capability and has been for a long time. The explosion of geospatial data tees up the marketplace and the commercial landscape into our sweet spot. We deal well with highly complex situations. The more complicated the problem is, the more intriguing it is for our teams. To deal with that complexity, we employ deep learning tools to specific imagery sets, going well beyond simple object detection.

What sets Harris apart from its competition?

It’s a combination of experience and investment in the future. We have more than 30 years of experience developing geospatial solutions using cutting-edge technology, so organizations seek our in-depth knowledge. Many of our analytic solutions are applicable from one customer mission to another. I’m a firm believer in this mindset of looking through the end customer’s lens first and backing into a solution that best fits their needs. Additionally, we’re investing quite a bit of internal research and development dollars in deep learning, artificial intelligence, and hardware for LiDAR.

What emerging GEOINT trends are Harris most excited about and how are you leveraging them to support your mission?

We’re seeing an emphasis on the use of commercial and open-source GEOINT data, products, and services. I’ve spoken with several strategy leaders at the National Geospatial-Intelligence Agency (NGA)—they definitely want and need to move toward commercial open source. That resonates with our strategy.

Similarly, there’s an emphasis on small sat capabilities. This new paradigm of imaging the entire Earth every day means oceans of data are flooding in. In turn, that data intake requires more automation and machine learning. Having those capabilities allows us to continue managing the data coming in from our partners, like Planet and DigitalGlobe.

Also, new cloud platforms like Earth on AWS and DigitalGlobe’s GBDX are exciting for our engineers. It’s all phenomenal technology, but the real fun for Harris is understanding the customer’s needs and figuring out how to employ the tools we’ve spent a lot of time and money on to solve the challenges at hand.

Weekly GEOINT Community News http://trajectorymagazine.com/weekly-geoint-community-news-41/ Mon, 29 Jan 2018 16:29:51 +0000 http://trajectorymagazine.com/?p=36019 National Space Defense Center Transitions to 24/7 Operations; Northrop Grumman to Form Innovation Systems Business Unit; Orbital ATK to Research Hypersonic Engines with DARPA; More

National Space Defense Center Transitions to 24/7 Operations

Less than a year after changing the name of the Joint Interagency Combined Space Operations Center to the National Space Defense Center (NSDC), the NSDC transitioned to 24/7 operations Jan. 8, marking a significant step for the expanding interagency team focused on protecting and defending the nation’s critical space assets. The NSDC directly supports unified space defense efforts and expands information sharing in space defense operations among the DoD, the National Reconnaissance Office, and other interagency partners.

Northrop Grumman to Form Innovation Systems Business Unit

Northrop Grumman announced it will stand up a new innovation systems business unit following its $9.1 billion purchase of Orbital ATK, which is expected to close in the first half of 2018. Blake Larson, Orbital’s COO, has been chosen to lead the innovation systems business.

Orbital ATK to Research Hypersonic Engines with DARPA

DARPA’s Advanced Full Range Engine (AFRE) program has called on Orbital ATK to help study the use of turbine and hypersonic engines in a new jet propulsion system. If successful, the engine will be able to operate in a range of speeds, from low-speed runway takeoff to hypersonic flight. Orbital ATK maintains multiple hypersonic test facilities on the East Coast and offers custom testing for the developing technology.

DARPA Exploring DNA as Molecular Computing Platform

DARPA is investigating DNA as a possible molecular computing platform capable of storing, retrieving, and processing massive data collections much faster than traditional electronics. One University of Washington computer science team wants to create a DNA-based image search engine by coding image features into DNA strings. Users would type in a coded query to extract sequences that match the query using magnetic nanoparticles. A sequencer and a few more algorithms would turn those sequences back into visible images. Other researchers are investigating atoms, ions, photons, electrons, and even organic compounds as potential data processors.

Google’s Lunar Xprize to End Without Winner

More than 10 years since its introduction, Google’s Lunar Xprize challenge for companies to land spacecraft on the moon will end with no viable winner. The four teams in competition either lack the capital to continue development or are not ready to launch this year. Google, which extended the challenge deadline from 2014 to 2015 to 2018, does not plan to allow another extension, and says it is pleased with the groundbreaking progress and companies that formed as a result of the competition.

USGIF and Hexagon Offer Software Grants to Universities

USGIF has partnered with Hexagon Geospatial to offer software licenses to 14 universities and colleges. Students and faculty at USGIF-accredited schools will receive three years of free access to Hexagon’s desktop and cloud-based Smart M.App software. Smart M.Apps are interactive mapping tools designed to help build geospatial cloud applications by combining content, analysis, and workflows.

Peer Intel

The White House reports President Trump will appoint Suzette Kent, a principal at professional services firm EY, as federal chief information officer. She'll be appointed as administrator of the Office of Management and Budget's office of electronic government, where she'll oversee cybersecurity and IT management regulations.

Kent Matlick, senior vice president and general manager of Vencore’s intelligence group, was named to the Intelligence and National Security Alliance’s new advisory committee. 

Vice Adm. Jan Tighe, the Navy’s director of intelligence, recently submitted her retirement paperwork. Nominated for her position is Vice Adm. Matthew Kohler, former commander of the Naval Information Forces Command.

Photo Credit: Dennis Wise/University of Washington

The Tipping Point for 3D http://trajectorymagazine.com/tipping-point-3d/ Fri, 12 Jan 2018 14:59:36 +0000 http://trajectorymagazine.com/?p=35748 The ability to fully harness 3D data is rooted in acquisition and scalability

The application of location intelligence and the incorporation of 2D maps and positioning have become ubiquitous since the advent of smartphones. Now, we are entering a new era in which we can harness the power of 3D data to improve consumer experiences, as well as applications for enterprise, public safety, homeland security, and urban planning. 3D will play a more significant role in these experiences as we overcome the technical barriers that have made it previously difficult and cost-prohibitive to acquire, visualize, simulate, and apply to real-world applications.

Outdoor Data: Our World Is 3D

In a geo-enabled community in which we strive for more precise answers to complex spatial intelligence questions, traditional 2D correlation is a limiting factor. When you think about 3D data and maps, modeling buildings in an urban environment seems obvious. However, 3D is incredibly important when trying to understand the exact height of sea level or the uniformity of roads and runways. For example, one can imagine the vast differences in 2D versus 3D data and its application during the 2017 hurricane season. By including the Z-dimension in analysis, we can achieve true, precise geospatial context for all datasets and enable the next generation of data analytics and applications.

Access to 3D data for most geospatial analysts has been limited. Legacy 3D data from Light Detection and Ranging (LiDAR) and synthetic aperture radar sensors has traditionally required specialized exploitation software, and point-by-point stereo-extraction techniques for generating 3D data are time-consuming, often making legacy 3D data cost-prohibitive. Both products cost hundreds to thousands of dollars per square kilometer and involve weeks of production time. Fortunately, new solutions provide a scalable and affordable 3D environment that can be accessed online as a web service or offline for disconnected users. Users can stream, visualize, and exploit 3D information from any desktop and many mobile devices. Models of Earth's terrain—digital elevation models (DEMs)—are increasingly used to improve the accuracy of satellite imagery. Although viewed on a 2D monitor, DEMs provide a true 3D likeness of bare-earth terrain, objects such as buildings and trees, contours, or floor models, and unlimited contextual information can be applied to each measurement. This provides a true 3D capability, replacing current "2.5D" applications that aim to create 3D models out of 2D information, at a cost closer to $10 to $20 per square kilometer and only hours of production time.
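To make the DEM concept concrete, the Python sketch below builds a small synthetic elevation grid and returns a terrain height for any projected coordinate; real DEMs are distributed as raster files (such as GeoTIFFs) and would be read with a geospatial library rather than generated in code, and the coordinates here are purely illustrative.

    import numpy as np

    # Toy digital elevation model (DEM): a regular grid of terrain heights in meters.
    origin_x, origin_y, cell = 500_000.0, 4_200_000.0, 30.0   # projected coords, 30 m posts
    dem = np.fromfunction(lambda r, c: 800 + 5 * np.sin(r / 10) + 3 * np.cos(c / 15),
                          (1000, 1000))

    def elevation_at(x, y):
        """Nearest-post lookup of terrain height for a projected coordinate."""
        col = int(round((x - origin_x) / cell))
        row = int(round((origin_y - y) / cell))   # rows increase southward
        return float(dem[row, col])

    # The Z value recovered here is what allows imagery to be corrected for terrain.
    print(elevation_at(500_450.0, 4_199_550.0))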

Indoor Data: Scalable 3D Acquisition

As 3D becomes more common in outdoor applications, its use for indoor location is being explored. Unsurprisingly, similar challenges need to be overcome. Until now, real-time indoor location intelligence has been difficult to achieve. This is largely due to the absence of, or difficulty in obtaining, real-time maps and positioning data to form the foundation for the insights businesses can derive about their spaces.

Image courtesy of InnerSpace.

To create a 3D model of a room, businesses stitch together blueprints, photos, laser scans, measurements, and hand-drawn diagrams. Once the maps and models are in hand, operators must create location and positioning infrastructures that accurately track people and things within the space. Operators need to position these sensors—typically beacons—precisely according to the building map, but the beacons have no knowledge of the map and cannot self-regulate to reflect any changes to the environment. If the beacon is moved, its accuracy is degraded and its correlation to the map breaks down. Overall, this process is lengthy, cost-prohibitive, and fraught with error.

Using current methodology, professional services teams work with a wide variety of tools—LiDAR trolleys, beacons, 3D cameras, and existing architectural drawings—to compose an accurate representation of a space. Additionally, the resulting system becomes difficult to maintain. Physical spaces are dynamic, and changes quickly render maps obsolete. Changes to technology used to create the models or track assets require ongoing management, and the process rapidly becomes overly complex. This complexity is stalling innovation across myriad industries, including training and simulation, public safety and security, and many consumer applications. While organizations and consumers can easily find data for outside locations using GPS, no equivalent data source exists for indoor location data.

Emerging location intelligence platforms leverage interconnected sensors that take advantage of the decreasing cost of off-the-shelf LiDAR components and the ubiquity of smartphones, wearables, and wireless infrastructure (WiFi, Bluetooth, ultra-wideband) to track people and assets. LiDAR remains an ideal technology for these sensors because its high fidelity is maintained regardless of lighting conditions and, unlike video, maintains citizen privacy. The result is a platform that is autonomous and scalable, and operates in real time to deliver 3D models and 2D maps while incorporating location and positioning within a single solution. These sensors are small and can be deployed on top of a building’s existing infrastructure—not dissimilar to a WiFi router.

The advantage of this approach is that the platform is able to capture the data needed to create 3D models of the indoor space while also understanding the sensors’ own positions relative to each other. Each sensor captures point clouds of the same space from different perspectives to create a composite point cloud that is automatically converted into a single structured model. This solves the two critical roadblocks that industry has faced when trying to acquire indoor data: Maps no longer need to be created/updated by professional services teams, and the maps and positioning data are always integrated and updated in real time.
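A minimal sketch of that fusion step, assuming each sensor's pose relative to a common room frame has already been estimated during self-calibration, is shown below in Python/NumPy; the rotation and translation values are placeholders, and production systems add registration refinement and surface reconstruction on top of this.

    import numpy as np

    def to_common_frame(points, rotation, translation):
        """Transform an (N, 3) point cloud from a sensor frame into the room frame."""
        return points @ rotation.T + translation

    # Two ceiling-mounted sensors view the same room from different corners.
    rng = np.random.default_rng(1)
    cloud_a = rng.uniform(0, 3, (5000, 3))      # points already in the room frame
    cloud_b = rng.uniform(0, 3, (5000, 3))      # points in sensor B's local frame

    # Illustrative pose for sensor B: rotated 90 degrees about Z and offset 6 m in X.
    rot_b = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    trans_b = np.array([6.0, 0.0, 0.0])

    composite = np.vstack([cloud_a, to_common_frame(cloud_b, rot_b, trans_b)])
    print(composite.shape)   # (10000, 3): one composite cloud for a single structured model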

The sensors track electromagnetic signals from people and assets within the space. This approach respects citizen privacy by capturing a unique identifier rather than personal information. Algorithms eliminate redundant data (such as signals from computers or WiFi routers) when identifying humans within a space and model the traffic patterns and behaviors over time. The data includes the person’s or asset’s longitude and latitude coordinates, along with altitude—which, as more people live and work in high-rise buildings, is becoming increasingly necessary for emerging enhanced 911 requirements in the United States. The scalability and real-time nature of a platform-based approach results in a stream of data that can be used to drive a variety of applications, including wayfinding, evacuation planning, training and simulation scenarios, airport security, and more.

Integrating Indoor and Outdoor 3D

Accurate correlation in four dimensions will drive the framework for future information transfer and corroboration. Fixed objects at a point in time must be properly located for all of their properties to be associated. This work is more challenging than it might appear. Many objects look very similar, and various sensors have differing levels of resolution and accuracy—bleed-over of attribution and misattribution of properties is possible. The better the 3D base layer, or foundation, the more likely all scene elements will be properly defined. Once objects move within the scene, the correlation of observables, initial position, and the changes to it often allow inference of intent or purpose.

Connecting data from outside to inside to deliver a seamless experience has yet to be solved, although progress is being made. Because indoor 3D can now be captured quickly and in real time, the opportunity to integrate it with outdoor 3D models finally exists. We expect the integration of 3D and its related positioning data will soon be ubiquitous regardless of where a person is located. In areas where data providers can work together, the same approach Google uses to track traffic could allow outdoor-to-indoor routing, and vice versa, to evolve rapidly. Companies creating the 3D data are defining the standards, and, as more data becomes available, accessing information can be as easy as "get.location" for software developers creating outdoor navigation apps. A centralized database with established formats, standards, and access protocols is recommended to ensure that analysts and developers work with the same datasets and that decisions and insights are derived from the same foundation, no matter where stakeholders are located.
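No such unified indoor/outdoor lookup exists as a standard today, but a hypothetical developer-facing call might look something like the Python sketch below; every name, field, and returned value is an assumption made for illustration only.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Position:
        lat: float
        lon: float
        alt_m: float            # altitude matters once users are indoors or in high-rises
        floor: Optional[int]    # indoor-only field; None when outdoors
        source: str             # e.g., "gnss", "wifi", "lidar-fused"

    def get_location(device_id: str) -> Position:
        """Hypothetical unified indoor/outdoor lookup a location platform might expose."""
        # A real implementation would query the provider's positioning service;
        # the values returned here are placeholders.
        return Position(lat=43.651, lon=-79.383, alt_m=112.0, floor=4, source="lidar-fused")

    fix = get_location("wearable-123")
    print(fix.floor, fix.source)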

3D Accessibility for Success

As it becomes easier to quickly and cost-effectively create and integrate indoor and outdoor 3D data, managing how that information is stored and accessed will be the next opportunity for the geospatial community. In order for 3D to be truly valuable, it must be easily—if not immediately—accessible for today’s devices. Ensuring 3D can be captured in real time will drive the need to deliver it quickly and across a wider variety of applications. A smart compression and standardization strategy is critical to the portability of the information. As the use of 3D by consumers increases, there will be more natural demand for ready access from user devices, which will help streamline and optimize applications (as it has for 2D mapping over the last decade).

Applying 3D to the real world, in real time, provides:

  • Improved situational awareness to users from their own devices.
  • Seamless wayfinding from outdoors to indoors.
  • Exceptionally detailed and portable data for military/emergency planners and operators.
  • Readily available data and web access for first responders and non-governmental organizations.
  • Global GPS-denied navigation capability for mission critical systems (e.g., commercial flight avionics).
  • A globally accurate positioning grid immediately available for analysis.

Moving Forward

3D is ready to play a bigger role in how we experience the world. The manipulation of location in 3D should be as natural as controlling a video game console. As long as the GEOINT Community keeps in mind what has been learned from both 2D mobile mappers and gaming aficionados, the move into the Z-dimension should prove as easy as it is worthwhile. Moving forward, 3D data developers and users have an important role to play—to provide feedback on what “feels” natural, and what doesn’t. After all, that’s what reinserting the third dimension is all about.

Headline image: The Great Pyramid of Giza, from 3D compositing of DigitalGlobe imagery. Courtesy of Vricon.

Uncloaking Adversaries through GIS http://trajectorymagazine.com/uncloaking-adversaries-gis/ Mon, 18 Dec 2017 22:02:45 +0000 http://trajectorymagazine.com/?p=35707 Using GIS for Activity Based Intelligence

In modern conflicts, adversaries hide in plain sight. Their intentions are often disguised in overwhelming volumes of data. Intelligence organizations are implementing Activity Based Intelligence (ABI) to uncloak these adversaries. ABI applies geographic thinking in new ways to help solve today’s complex intelligence problems.

At any given moment, every person, object, and activity is connected to a place and a time. This spatiotemporal data is essential for intelligence. It is captured by sensors, transactions, and observations, which intelligence analysts can bring together in a Geographic Information System (GIS). GIS manages the data that is critical to discovering the unknown, and GIS analytic tools transform that data into intelligence, which drives action.

Today’s challenge is scaling out this workflow to the entire intelligence community, moving ABI practices from small, resource-rich teams to put them in the hands of every analyst. This will require organizations to harness the latest advances in technology and bring order to the growing volume of intelligence data.

Space/Time Data Conditioning

Activity data comes from a variety of sources and requires specific conditioning before it is useful. Location is the only value common to all of these sources, and it can be used to integrate them. GIS allows analysts to define rules for integrating multiple data sources, and integrating the data reveals patterns that were previously invisible.

Insights combined (Photo credit: Esri)

A large volume of intelligence comes through manual reporting, such as Imagery Intelligence (IMINT) specialists watching drone feeds or Human Intelligence (HUMINT) agents reporting from the field. Structured observations become data points, which are integrated along with all the other sensor data, adding to the known intelligence data. After geo-referencing, the data can be geo-enriched. This process connects observation data with foundation intelligence. Foundation intelligence provides context about the environment where these activities occur: cultural factors like religion, language, and ethnicity; the physical environment—urban or rural; and known locations like safe houses and meeting places. Connecting observations with known information gives critical context and helps distinguish suspicious activity from all the normal activity occurring around it.
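A toy example of such a geo-enrichment rule, written in plain Python with invented coordinates and site names, attaches any known location within a fixed radius to an incoming, geo-referenced observation:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two WGS84 points."""
        r = 6_371_000
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Foundation intelligence: known locations with context (illustrative values).
    known_sites = [
        {"name": "suspected safe house", "lat": 33.3128, "lon": 44.3615},
        {"name": "market", "lat": 33.3152, "lon": 44.3661},
    ]

    # A geo-referenced field observation (illustrative values).
    obs = {"report_id": "HUMINT-0097", "lat": 33.3131, "lon": 44.3620,
           "time": "2017-11-02T14:05:00Z"}

    # Geo-enrichment rule: attach any known site within 250 meters of the observation.
    obs["nearby_sites"] = [s["name"] for s in known_sites
                           if haversine_m(obs["lat"], obs["lon"], s["lat"], s["lon"]) <= 250]
    print(obs["nearby_sites"])

In a production GIS, the same rule would run as a spatial join against foundation layers rather than a Python loop, but the logic is identical: location is the key that connects the observation to its context.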

Enabling a Spatiotemporal Data Environment

As the functional manager for geospatial intelligence, NGA hosts the Intelligence Community GIS Portal (IC GIS Portal). While Sue Gordon was deputy director of NGA, she stated: "The IC GIS Portal, which is our first GEOINT service and provides easy access to NGA data, is now about two years old. Within that time, we've grown from zero users to almost 60,000 users worldwide." This demonstrates the power of this data environment for supporting intelligence analysis, access to foundation GEOINT, and simple collaboration and sharing.

Access to this foundation intelligence and analyst reporting is critical to enabling ABI workflows. In addition, the GIS platform capabilities are expanding with cloud-based applications and services for real-time and big data analytics. GIS can connect to machine learning and artificial intelligence (AI) systems to assist in automated intelligence production and alerting. Imagery will be connected to these systems to enable on-the-fly analysis and production. As new data types are integrated, analysts will be able to spend less time on data conditioning and more time on analysis.

Enabling the Analyst

Ultimately, for ABI workflows to be successful, analysts need to be able to use their cognitive ability to make connections in the data. They need to leverage a powerful suite of analytic tools and visualization capabilities to make sense of data. They need to be able to create products which expose their analysis along with the underlying data and workflows.

Integrate intelligence information using space and time (Photo credit: Esri)

ABI takes a discovery approach to building intelligence. Rather than merely providing updates on current intelligence, the ABI method calls for integrating all-source intelligence with other data to discover and monitor relevant information. With integrated data, analysts can discover a threat signature or indicator otherwise not discernable. 

GIS provides visualization and analytic tools for working with intelligence data. Maps, the foundation of a GIS, can be used to understand complex patterns and to visualize the spatial importance and relevance of data at a specific time. Geospatial analysis tools can be used to find statistically significant patterns in the data and to help predict future outcomes. Analysts use these visualizations to explore data and to develop assessments.
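As a crude stand-in for the formal hotspot statistics a GIS provides (for example, Getis-Ord Gi*), the Python sketch below bins synthetic event points into a grid and flags cells whose counts sit far above the mean; the data and thresholds are invented, and the point is only to show how spatial statistics surface clusters that a human scanning a dot map might miss.

    import numpy as np

    def hotspot_cell_count(lons, lats, cell_deg=0.01, z_cut=3.0):
        """Bin point events into a lat/lon grid and count unusually dense cells."""
        cols = np.floor(lons / cell_deg).astype(int)
        rows = np.floor(lats / cell_deg).astype(int)
        _, counts = np.unique(np.stack([rows, cols], axis=1), axis=0, return_counts=True)
        z = (counts - counts.mean()) / (counts.std() + 1e-9)
        return int((z > z_cut).sum())

    rng = np.random.default_rng(2)
    lons = rng.uniform(44.30, 44.40, 2000)                          # background activity
    lats = rng.uniform(33.28, 33.38, 2000)
    lons = np.concatenate([lons, rng.normal(44.36, 0.002, 300)])    # one dense cluster
    lats = np.concatenate([lats, rng.normal(33.31, 0.002, 300)])
    print(hotspot_cell_count(lons, lats), "candidate hotspot cells")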

Implementing an Activity Based Intelligence Platform

ABI is emerging as a formal method of discovery intelligence. With ABI’s foundation in spatial thinking, GIS technology is a key enabler. An enterprise GIS creates a spatiotemporal data environment capable of connecting analysts with foundation intelligence data and applications for analysis and production. These tools have been implemented successfully in many organizations and have proved to scale to even the most complex agencies.

  • If you are interested in learning more about how to leverage GIS technology and ABI methodologies in your organization, visit go.esri.com/FutureIntel2018 or contact us at Intelligence@esri.com.

Weekly GEOINT Community News http://trajectorymagazine.com/weekly-geoint-community-news-30/ Mon, 06 Nov 2017 16:15:34 +0000 http://trajectorymagazine.com/?p=35378 Boundless Moves Headquarters to St. Louis; DigitalGlobe and Orbital Insight Expand Partnership; Planet Doubles Capacity to Capture Sub-One Meter Imagery

Boundless Moves Headquarters to St. Louis

GIS company Boundless announced it will move its headquarters from New York City to St. Louis. The location change is a response to the explosion of the geospatial market in St. Louis (including its significant NGA presence) and a move to bolster the company’s recruiting efforts. Boundless plans to expand from approximately 100 employees to 150 in the next two years by leveraging the talent emerging from St. Louis area universities. Additionally, St. Louis has been recognized as a top startup city in America.

DigitalGlobe and Orbital Insight Expand Partnership

DigitalGlobe and analytics firm Orbital Insight expanded their existing data partnership in pursuit of global-scale analytic solutions to help businesses make better-informed decisions. Orbital Insight will become a partner on DigitalGlobe’s Geospatial Big Data (GBDX) platform to improve the resolution and spatial coverage of the data interpreted by the system. DigitalGlobe will grant Orbital Insight access to its time-lapse image library, which Orbital will use to refine its own consumer data analysis offering.

Planet Doubles Capacity to Capture Sub-One Meter Imagery

Planet made its twentieth satellite launch last week, sending six SkySats and four Doves aboard an Orbital ATK rocket destined for a sun-synchronous 500 km orbit. The new SkySat constellation will double Planet's capacity to capture sub-one-meter imagery. The fleet was sent to an afternoon crossing time (as opposed to the typical morning crossing time) to capture new and diverse datasets.

Peer Intel

The MITRE Corporation appointed Samuel S. Visner the new leader of the National Cybersecurity Federally Funded Research and Development Center (NCF). Visner joins MITRE from ICF International, where he served as senior vice president for cybersecurity and resilience. He’ll continue NCF’s recent efforts to streamline operations and expand academic and cybersecurity partnerships.

Photo Credit: NextSTL

Predicting Prosperity http://trajectorymagazine.com/predicting-prosperity/ Fri, 03 Nov 2017 14:55:51 +0000 http://trajectorymagazine.com/?p=35365 AI program predicts area incomes from space

Penny is an artificial intelligence (AI) program built by Stamen Design and researchers at Carnegie Mellon University on top of GBDX, DigitalGlobe’s analytics platform, to predict neighborhood income levels based solely on satellite imagery. Penny analyzes the shapes and structures that make up city blocks and labels neighborhoods as high, median, or low-income areas.

To train Penny, the team began by using financial data from the U.S. Census Bureau to color-code tiles around cities, essentially creating citywide income maps. Those maps were laid over corresponding DigitalGlobe satellite imagery to correlate urban building types with income levels. With those two datasets, the AI's neural network learned over time how to distinguish features of wealth from features of poverty.
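Penny's exact architecture is not spelled out here, but the general recipe (image tiles in, income class out) can be sketched in a few lines of PyTorch; the network below is deliberately tiny, the tensors are random stand-ins for DigitalGlobe tiles and Census-derived labels, and a real model would train on many thousands of labeled tiles.

    import torch
    import torch.nn as nn

    # A small CNN mapping an RGB image tile to one of three income classes,
    # in the spirit of Penny (low / median / high).
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 3),
    )

    # Random tensors stand in for satellite tiles and Census-derived labels.
    tiles = torch.randn(8, 3, 64, 64)          # batch of 64x64 RGB tiles
    labels = torch.randint(0, 3, (8,))         # 0 = low, 1 = median, 2 = high income

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    logits = model(tiles)                      # one illustrative training step
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    print(float(loss))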

After running through these datasets, Penny can predict income levels with an average accuracy of 86 percent. Baseball diamonds, parking lots, and large, identical buildings like housing projects indicate lower income areas. Single-family homes and apartments are found mostly in middle-high or middle-low income areas, depending on other factors like freeways and trees. High-rise buildings, homes with large yards, and parks mark high-income neighborhoods.

Penny has implications beyond simple pattern recognition. It’s intended to challenge our understanding of how place and people’s well-being relate, and is a bold step toward answering large-scale, abstract questions using non-human intelligence. In the future, AI programs inspired by Penny could help urban planners build smarter, more efficient, and more beautiful cities.

Users can test Penny for themselves, dragging and dropping individual features (a pool, an airplane, an apartment high-rise) into a small region of the virtual cityscape to see how they affect the algorithm’s understanding of the area’s income. The AI is currently available for free use with imagery of New York and St. Louis.

Photo Credit: DigitalGlobe
