The Necessity of GPUs

Resolutions have surpassed Moore’s Law: What does that mean for data analysis?

[Image: Global Hawk]

Editor’s note: Sean Varah is CEO and Founder of MotionDSP, a Silicon Valley-based company making advanced video enhancement and computer vision software used by commercial industries, law enforcement, and the military.

A new class of next-generation, high-resolution aerial sensors is coming online in the next few years. Northrop Grumman recently tested United Technologies' SYERS-2 sensor on Global Hawk. Whether installed on satellites, manned aircraft, or drones, these new sensors will collect much more data than their predecessors, and that data will require the analytics community to take advantage of the very latest in commercial computing technology. Are you using the right computing architecture?

If you are designing local or cloud-based image processing systems and are not planning to use GPUs, you have some major obstacles ahead of you. Traditional CPUs have a substantial scaling problem with real-time image processing or computer vision algorithms such as detection and tracking. CPUs have already been outclassed by GPUs, which are far better suited to the massive processing requirements of high-definition video and high-resolution imagery.

Resolutions are Increasing Faster than Moore’s Law

From 2009 to 2014, video resolutions increased from SD (640×480) to 4K (3840×2160), a 27-fold increase in raw pixels per frame. Over the same period, GPU processing power increased by 33x, while Moore's Law delivered only a 10.3x increase for CPUs.
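
As a quick back-of-the-envelope check, here is a minimal sketch in Python using only the numbers cited above:

    # Raw pixel counts for the SD and 4K frame sizes cited above
    sd_pixels = 640 * 480            # 307,200 pixels
    uhd_pixels = 3840 * 2160         # 8,294,400 pixels
    print(uhd_pixels / sd_pixels)    # ~27x more raw pixels per frame

    # Growth multiples cited for the same 2009-2014 window
    gpu_growth, cpu_growth = 33.0, 10.3
    print(gpu_growth / cpu_growth)   # GPUs gained roughly 3.2x more headroom than CPUs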


Wide-area motion imagery (WAMI) sensor resolutions are going even further. Harris’ CorvusEye is 115 megapixels (about 10,720 x 10,720) and PV Labs’ PSi-ViSiON 3000 system is 300+ megapixels (about 17,320 x 17,320).

MotionDSP has used GPUs to accelerate our image processing and computer vision software for more than eight years. In 2008, we saw our image processing algorithm hit a wall with multi-core CPUs. So we started working with NVIDIA’s CUDA SDK, which allows software to use GPUs for computing. We immediately saw performance over CPUs increase five- to 10-fold, and more importantly, GPUs enabled our software to work in real time on live video feeds. That simply wasn’t possible with CPUs. Since then, every 18-24 months, NVIDIA has released new GPU cards that are twice as powerful—easily keeping pace with the increase in video and image resolutions.
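
MotionDSP's CUDA code isn't public, and the snippet below is not it; it is only a minimal sketch of the general offload pattern, using CuPy (a NumPy-compatible Python library backed by CUDA) as an illustrative stand-in. The frame size and the "enhancement" step are placeholders; the point is that the same per-pixel math runs on either the CPU or the GPU, and only the array library changes.

    import time
    import numpy as np
    import cupy as cp  # NumPy-compatible arrays backed by a CUDA GPU

    def enhance(xp, frame):
        # Toy stand-in for a per-frame enhancement step: a contrast stretch.
        # xp is either numpy (CPU) or cupy (GPU); the math is identical.
        lo, hi = frame.min(), frame.max()
        return xp.clip((frame - lo) / (hi - lo), 0.0, 1.0)

    frame = np.random.rand(2160, 3840, 3).astype(np.float32)  # one 4K RGB frame

    t0 = time.perf_counter()
    enhance(np, frame)                        # CPU pass
    cpu_ms = (time.perf_counter() - t0) * 1e3

    frame_gpu = cp.asarray(frame)             # copy the frame into GPU memory
    cp.cuda.Stream.null.synchronize()
    t0 = time.perf_counter()
    enhance(cp, frame_gpu)                    # GPU pass, same code path
    cp.cuda.Stream.null.synchronize()         # wait for the kernel to finish before timing
    gpu_ms = (time.perf_counter() - t0) * 1e3

    print(f"CPU: {cpu_ms:.1f} ms/frame, GPU: {gpu_ms:.1f} ms/frame")

Note that the timing excludes the host-to-device copy; keeping frames resident in GPU memory between processing steps is part of what makes real-time rates achievable.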

Airborne + Real-time Applications = GPU Required

If your application requires real-time processing, CPUs will not make the cut. Take, for example, performing target detection and tracking from WAMI sensors made by companies such as Harris and PV Labs.

Using a powerful GPU, MotionDSP's Ikena ISR software can do live detection and tracking on 8K imagery at two frames per second. On a CPU, that figure drops to 0.146 frames per second, 13.7 times slower than the GPU.
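
Those frame rates translate directly into per-frame latency; a quick check of the arithmetic, using only the figures quoted above:

    gpu_fps, cpu_fps = 2.0, 0.146       # frame rates quoted above for 8K imagery
    print(gpu_fps / cpu_fps)            # ~13.7x: the GPU-to-CPU speedup
    print(1 / gpu_fps, 1 / cpu_fps)     # 0.5 s vs. ~6.8 s of latency per frame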


In video processing, when you run into a wall like this, the traditional approach is to divide the image into smaller tiles and process them on separate CPUs. This presents two obstacles:

  1. It’s slow. Dividing the image into tiles, processing separately, and then gathering the results and recombining them creates an enormous amount of computational/data overhead that results in unacceptable latency. Processing the full frame on a single GPU has a huge advantage.
  2. Enormous SWaP (size, weight, and power). Using the previous example, MotionDSP's WAMI software can process an 8K image stream in real time on a single GPU, in a single server. On a CPU, that processing is 13.7x slower, so 14 CPUs would be needed to achieve real time. Even using dual-CPU servers would still require seven servers compared to one (see the sketch after this list).
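
The server-count arithmetic in item 2 works out as follows; this is a rough sketch using only the article's figures, with the per-server wattage derived from the 3,500-watt total cited below.

    import math

    speedup = 13.7                                # GPU vs. single-CPU throughput, from above
    cpus_needed = math.ceil(speedup)              # 14 CPUs to match one GPU in real time
    servers_needed = math.ceil(cpus_needed / 2)   # 7 dual-CPU servers vs. 1 GPU server
    print(cpus_needed, servers_needed)            # 14, 7
    print(3500 / servers_needed)                  # ~500 W per server, per the total cited below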


Will seven servers fit in an aircraft? Do you have 3,500 watts to spare? Maybe on a Joint STARS, but not a Reaper.

Even if you don’t require real-time processing and your use case is batch processing of imagery after the aircraft lands, CPUs still don’t scale: the same throughput gap applies, so you pay for it in hardware, power, and turnaround time instead of latency.

New aerial and space sensors are here, but are your software and hardware prepared to process all that data? If you are processing video or imagery, you should be configuring new workstations or servers with the largest GPUs possible. Otherwise, you are building an obsolete system. GPUs are mature, proven, commercial, and incredibly cost-effective for the TeraFLOPS of compute they offer.

Photo Credit: U.S. Air Force
