Advances in graphics processing units (GPUs) for powering lifelike 3D graphics and artificial intelligence are making it possible to see and understand our changing world with much greater clarity. NVIDIA’s upcoming annual GPU Technology Conference in Washington, D.C. will bring together industry professionals, government stakeholders and regional tech trailblazers next week (October 22-24) to explore and chart the future of GPU hardware and applications.

Our participation builds on our collaboration with NVIDIA and provides an exciting opportunity to demonstrate how Radiant and our customers are harnessing GPU technology to solve unique geospatial problems and reveal insights where and when it matters.

Maxar is a global provider of vertically integrated capabilities and expertise including satellites, Earth imagery, robotics, geospatial data and analytics. We help government and commercial customers anticipate and address their most complex mission-critical challenges with confidence. Key to Maxar’s robust portfolio of solutions are its strategic partnerships with industry leaders like NVIDIA.

Maxar's EVP of Global Field Operations, Tony Frazier, and the National Geospatial-Intelligence Agency’s (NGA) William R. “Buzz” Roberts will talk about “Harnessing Artificial Intelligence, Automation and Augmentation (AAA) to Build a Better World” on October 23 at 3:30 p.m. ET. They will share success stories about how AAA, enabled by deep learning on NVIDIA GPUs, is being applied to augment a variety of NGA missions and is helping close the time gap between data collection and decision-making.

We will also showcase products and capabilities enabled by GPUs that help our customers see the world in new ways through virtual and augmented reality, apply machine learning to find useful patterns in massive amounts of data, and perform geospatial processing and analytics at greater speed, scale and complexity than previously possible.

CityBox

Decision-makers have coveted virtual reality (VR) technology for decades. Until about five years ago, VR’s limited utility and prohibitive cost meant early users were frequently underwhelmed. Fortunately, the explosive growth in consumer VR applications for gaming has resulted in the commercialization of VR hardware, applications and GPU advancement at a breakneck pace—making VR technology affordable and readily available.

Based on a geospatially enabled gaming engine, CityBox makes it easy to import geospatial data, explore and analyze that data, and export the findings. CityBox usually runs in an immersive VR mode, but it also supports augmented reality (AR) glasses and a basic desktop mode. We designed CityBox to be a cost-effective, flexible and powerful tool that enables immersive mission planning, training and analysis without ever putting people on the ground.

Imaging through Video Turbulence (IVT)

For the last two decades, scientists and engineers have been developing and improving software techniques that dramatically improve the usability of video taken under challenging conditions, including atmospheric turbulence due to heating and water vapor, low-light situations and precipitation. This is important because the optical sensors used across a variety of applications (from autonomous vehicles to airborne sensors) are inevitably subject to video noise from movement and environmental conditions, making it more difficult to accurately analyze the imagery.

Maxar's successful “imaging through video turbulence” (IVT) algorithm implementation on NVIDIA GPUs has taken these capabilities out of the laboratory and into the field. Our IVT capabilities enable users to overcome significant limitations in source data to produce clear imagery they can trust for accurate analysis and facial and character recognition in support of decision-making.
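Maxar's IVT implementation is proprietary, but one core idea behind turbulence mitigation can be sketched simply: fuse a registered stack of video frames of a mostly static scene with a per-pixel temporal median, which suppresses the random per-frame excursions turbulence causes. Everything below (the synthetic scene, the noise model, the `temporal_median` helper) is illustrative, not Maxar code:

```python
import numpy as np

def temporal_median(frames):
    """Fuse a stack of co-registered frames of a static scene.

    Turbulence perturbs pixels randomly from frame to frame; a per-pixel
    temporal median rejects those excursions while preserving the scene.
    Real IVT pipelines add registration and deconvolution on top of this.
    """
    stack = np.stack(frames, axis=0).astype(np.float32)
    return np.median(stack, axis=0)

# Simulate a static scene corrupted by per-frame turbulent noise.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(64, 64)).astype(np.float32)
frames = [scene + rng.normal(0, 25, scene.shape) for _ in range(31)]

fused = temporal_median(frames)
err_single = np.abs(frames[0] - scene).mean()
err_fused = np.abs(fused - scene).mean()
print(f"single-frame error {err_single:.1f}, fused error {err_fused:.1f}")
```

Because the median is computed independently per pixel, the operation parallelizes naturally, which is why this class of algorithm benefits so much from GPU implementations.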

Machine Learning

Maxar is harnessing advances in machine learning, specifically deep learning, to help our users analyze data at an unprecedented velocity and global scale in support of time-sensitive missions like humanitarian assistance and disaster response. In addition to automating foundational mapping tasks, the output from our deep learning algorithms can be used to aid the characterization of human activity over time. For example, after counting all transportation objects like planes, trains and automobiles, we can create a baseline for human activity by viewing the results in aggregate. This information can be used to supplement foundational human geography data sources like economic, demographic or market information and provide valuable context for decision-makers.
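The "view the results in aggregate" step above can be illustrated with a minimal sketch: accumulate daily object counts from a detector, take a robust baseline, and flag days that deviate sharply. All numbers here are synthetic and hypothetical, not Maxar data:

```python
import statistics

# Synthetic daily counts of detected vehicles at one site (illustrative only).
daily_counts = [118, 124, 121, 130, 119, 127, 122, 125, 210, 123]

# Baseline = typical activity level; the median is robust to outlier days.
baseline = statistics.median(daily_counts)

# Median absolute deviation gives a robust spread estimate.
mad = statistics.median(abs(c - baseline) for c in daily_counts)

# Flag days whose count deviates far beyond normal variation.
anomalies = [i for i, c in enumerate(daily_counts)
             if abs(c - baseline) > 5 * mad]
print(baseline, anomalies)
```

A median-based baseline is used here because a single unusual day (such as the spike to 210) would drag a mean-based baseline toward itself and mask the very anomaly an analyst wants to see.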

Above, one of our computer vision models depicts objects identified within Maxar’s satellite imagery. The DeepCore SDK toolkit allows users to download and manipulate the objects as geospatial vector files and classify them into object categories. NVIDIA GPUs using CUDA technology (in “GPU Mode”) enable rapid object detection, resulting in faster, more efficient processing of large geographic areas. In the future, our goal is to automatically increase the accuracy of our algorithms as they are deployed in the field, as well as to increase analyst efficiency by continuously monitoring a multitude of objects over geographically diverse areas.
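DeepCore's actual output format is not reproduced here, but classified detections of the kind described above are commonly exchanged as GeoJSON vector files. A minimal sketch with hypothetical detections and class labels, using only the standard library:

```python
import json

# Hypothetical detections: (longitude, latitude, class label).
# This is NOT DeepCore's output format, just a generic illustration
# of a "geospatial vector file" of classified objects.
detections = [(-77.0365, 38.8977, "aircraft"),
              (-77.0502, 38.8893, "vehicle")]

features = [{
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [lon, lat]},
    "properties": {"class": label},
} for lon, lat, label in detections]

geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson, indent=2)[:80])
```

A file like this can be loaded directly by most GIS tools, which is what makes vector output a convenient hand-off point between a detection pipeline and an analyst's workflow.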

xTerrain

Rapid response: that’s what is expected from anyone who serves a critical mission. Data too large to enable a rapid response just isn’t used, and if the compute takes too long, the analysis simply isn’t attempted. Analysts using terrain data in support of critical missions ranging from disaster response to national security have long been constrained by data size and compute limits. Our technical experts have been working with GPUs to solve these geospatial high-performance computing challenges for almost 15 years.

Maxar's xTerrain Analytics leverages NVIDIA GPUs to tackle this challenge. GPU performance has increased 25x over the last five years, and a growing ecosystem of software libraries allows us to make better use of this hardware. In addition, more GPUs can now be provisioned from the cloud as needed to tackle the processing required for xTerrain’s intensive analytic tasks.
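xTerrain's internals are not public, but a simple terrain analytic such as per-cell slope shows why this workload maps so well to GPUs: it is embarrassingly parallel array math. The sketch below uses NumPy; the same code can run on NVIDIA GPUs by swapping in CuPy, which mirrors the NumPy API. The DEM and cell size are made up for illustration:

```python
import numpy as np  # CuPy exposes the same API for GPU execution

def slope_degrees(dem, cell_size):
    """Per-cell slope (degrees) from a digital elevation model.

    dem: 2-D array of elevations (metres); cell_size: grid spacing (metres).
    Every cell is independent, so the computation parallelizes trivially.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A planar ramp rising 1 m per 10 m cell in x => slope = atan(0.1) ≈ 5.71°.
dem = np.tile(np.arange(0.0, 50.0, 1.0), (50, 1))
slopes = slope_degrees(dem, cell_size=10.0)
print(round(float(slopes.mean()), 2))
```

The same per-cell independence holds for viewshed, line-of-sight and hydrology analytics, which is why provisioning more cloud GPUs translates almost directly into faster terrain processing.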

We have a lot of exciting geospatial solutions to share at GTC DC. We hope you will attend our presentation and visit us on the exhibit floor at Booth 113.

Thanks to the Maxar team members who also contributed to this blog: Nick Deliman for the CityBox section; Dr. Russell Sieron for IVT; Kevin McGee for Machine Learning; Jim Stokes for xTerrain; and Ryan Smith for their input.
