CUDA Spotlight: Jon Rogers


GPU-Accelerated Guidance and Control for Robotic Systems

This week's Spotlight is on Jon Rogers, Assistant Professor in the Department of Aerospace Engineering at Texas A&M University. Jon is director of the Helicopter and Unmanned Systems Lab, where he works on new technologies for autonomous systems.

Jon is currently exploring new algorithms and sensing technologies to increase the task complexity of robotic devices. His research encompasses the fields of nonlinear dynamics, robust control, and high-performance computing. This interview is part of the CUDA Spotlight Series.

Q & A with Jon Rogers

NVIDIA: Jon, tell us about your robotics research at Texas A&M.
Jon: Today’s robotic systems are limited in the complexity of tasks they can perform. Task complexity can be defined as a robot’s ability to make decisions under significant uncertainty about its environment or its own dynamic characteristics. Current guidance and control methods for robotic systems do not enable tasks nearly as complex as those that humans can perform.

In a broad sense, our research focuses on creating new technologies that will revolutionize task complexity for robots and autonomous systems. We are looking at this problem from multiple perspectives. First, we are exploring low-cost sensors that mimic the perception techniques used by insects and other biological systems. Second, we are examining new ways to quantify uncertainty in real time and use it in feedback control to make better decisions in uncertain environments.

Coupling uncertainty quantification (UQ) with feedback control provides a powerful capability. GPUs are the tools that allow us to quantify uncertainty in real time: we can run hundreds or thousands of dynamic predictions in the time it used to take to run one. Now robots can base control inputs on a rich set of predictions that incorporates all the uncertainty they face. Furthermore, through GPU-based uncertainty propagation, they can evaluate the future risk associated with each action at the current time.
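The idea can be sketched in a few lines of CPU-side Python (the flight code implements this as CUDA kernels; the one-dimensional dynamics, gains, and disturbance model below are purely hypothetical stand-ins): run an ensemble of forward predictions, each with its own disturbance draw, and score a control action by the fraction of predictions that violate a constraint.

```python
import random

def predict(x0, control, disturbance, steps=10):
    # Hypothetical discrete-time dynamics with an uncertain disturbance
    # entering at every step.
    x = x0
    for _ in range(steps):
        x = 0.9 * x + control + disturbance
    return x

def future_risk(x0, control, limit, n_samples=20000):
    # Ensemble of forward predictions, one disturbance draw per sample;
    # each prediction is independent (one GPU thread in the real code).
    exceed = sum(predict(x0, control, random.gauss(0.0, 0.1)) > limit
                 for _ in range(n_samples))
    return exceed / n_samples
```

Because every sample is independent, the loop over `n_samples` is exactly the part that maps onto thousands of GPU threads.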

NVIDIA: What is the charter of the Helicopter and Unmanned Systems Lab (HUSL)?
Jon: Our charter is to develop the next generation of robotic systems through advancements in nonlinear control, sensing technology, and vehicle design. We have historically focused on vertical lift robotics, but our research is increasingly broad-based, covering everything from vehicle energy harvesting and obstacle avoidance to sensor fault detection and isolation (FDI).

A unique aspect of our lab is the emphasis on taking a project from theoretical conception through simulation to experiment. Many of our projects begin as a sketch on paper and culminate in a flight test. This keeps our research relevant to the larger community, since the real-world issues that arise during implementation are dealt with early in the project.

NVIDIA: Tell us about the parafoil test flights planned for this summer.
Jon: This summer we will be flight testing an autonomous parafoil system equipped with an on-board GPU (a CUDA-on-ARM development board). The GPU will be used to perform massively parallel trajectory prediction to facilitate obstacle avoidance. This will be a demonstration of the theoretical and simulation work we have done over the past year.

Guided parafoils are commonly used to deliver supplies for humanitarian aid and other missions, but they often miss their targets due to wind gusts and turbulence. Our new guidance scheme creates a candidate set of trajectories from the parafoil’s current location to the target. Using real-time Monte Carlo simulation on a GPU, we evaluate the obstacle collision risk associated with each trajectory given uncertainty in winds. This results in a robust guidance algorithm in which the risk due to uncertain winds is evaluated in real time and control decisions are made accordingly.
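The selection step can be illustrated with a minimal sketch, assuming toy one-dimensional trajectories, a Gaussian wind offset, and a single obstacle band (all hypothetical; the real system evaluates thousands of full dynamic trajectories on the GPU):

```python
import random

def collides(path, wind_offset, obstacle_x=5.0, half_width=1.0):
    # A trajectory "collides" if any wind-perturbed point falls inside
    # a band around the obstacle (a stand-in for real terrain checks).
    return any(abs(x + wind_offset - obstacle_x) < half_width for x in path)

def collision_risk(path, n_samples=5000, wind_sigma=0.5):
    # Monte Carlo over wind uncertainty: the fraction of sampled wind
    # realizations in which this candidate trajectory hits the obstacle.
    hits = sum(collides(path, random.gauss(0.0, wind_sigma))
               for _ in range(n_samples))
    return hits / n_samples

# Two candidate paths from the current position toward a target at x = 10:
direct = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # passes through the obstacle
detour = [1, 2, 3, 3, 3, 8, 8, 9, 10, 10]  # routes around it

# The guidance law flies the candidate with the lowest estimated risk.
best = min([direct, detour], key=collision_risk)
```

The `min` over candidates is the "control decisions made accordingly" part; everything inside `collision_risk` is the embarrassingly parallel workload.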

A guided parafoil-payload system

This new paradigm of using massively parallel processing for collision risk assessment and uncertainty quantification has the potential to be a game-changing technology for parafoil systems. Recently we showed that this GPU-based guidance scheme enables accurate landings in some very challenging drop zones where winds have historically caused large miss distances.

NVIDIA: What are some key challenges in your field?
Jon: Arguably, the number one problem in the robotics field today is uncertainty management. One example of this arises in human-robot interaction. Human commands are often ambiguous, and a robot must make decisions about what commands mean given substantial uncertainty. Ask anyone who uses voice-recognition software in their car or on their phone.

For aerial robotic systems, the problem is much worse. Uncertainty exists not just in human interaction, but also due to sensing errors, winds, and modeling error. Control systems are expected to achieve a high degree of reliability considering all the uncertainty the aircraft is subjected to. Historically, this field of study has been known as robust control.

Researchers in the aerospace robotics area are now looking at how to leverage high-performance computing to evaluate risk in real time. New robust control algorithms can be created in which the risk associated with particular control actions is accurately quantified for nonlinear systems.

NVIDIA: What role does GPU computing play in your work?
Jon: Essentially, the GPU is a tool that allows us to quantify uncertainty accurately in real time for robotic devices. Previous robust control algorithms made linear, Gaussian approximations to ensure that control formulations could be executed in real time. The problem is that these assumptions are not valid for many aerospace vehicles and lead to poor performance.

GPUs allow us to propagate non-Gaussian uncertainty for nonlinear systems through Monte Carlo simulation. Monte Carlo simulation is an “embarrassingly parallel” process, and implementing it on a GPU yields a one-to-two order-of-magnitude reduction in runtime.
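A toy example of why the linear/Gaussian shortcut fails and what Monte Carlo recovers (the quadratic map below is a hypothetical stand-in for nonlinear vehicle dynamics): push a Gaussian input through a nonlinearity and the output is no longer Gaussian, and even its mean differs from what linearization predicts.

```python
import random

def nonlinear_step(x):
    # Hypothetical nonlinear dynamics: a quadratic map.
    return x * x

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100000)]
pushed = [nonlinear_step(x) for x in samples]

# Linearizing about the mean (x = 0) predicts an output mean of
# f(0) = 0; Monte Carlo recovers the true mean, E[x^2] = 1.
mc_mean = sum(pushed) / len(pushed)
linearized_mean = nonlinear_step(0.0)
```

Each sample is propagated independently of every other sample, which is what makes the method embarrassingly parallel.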

Whereas 10 years ago Monte Carlo simulations were something we did offline in the lab on a CPU, now we can execute them in real time on the vehicle itself with a GPU. This is a paradigm shift in the robust control community, enabled only by low-power parallel computing devices. Accurate non-Gaussian uncertainty propagation is now possible for nonlinear systems in real time thanks to these new computing architectures.

We developed our Monte Carlo code directly in CUDA C/C++. The core of this code was developed several years ago, before Thrust and OpenACC were mature. Our current versions use CURAND to generate randomized initial conditions and wind values in each GPU thread. This works quite well and allows us to minimize data transfer from CPU to GPU.

For instance, for the parafoil code we need to generate about 1.4 million random numbers (100,000 trajectories × 14 randomized variables) every time we run a Monte Carlo simulation, which occurs roughly every 2 seconds during the parafoil flight. The ability to generate these numbers directly on the GPU using CURAND, rather than transferring them from the CPU, removes a serious bottleneck in the code.
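The sample-count arithmetic, and the per-thread generation pattern it implies, can be sketched in host-side Python (the names and seed are illustrative; on the GPU each thread draws from its own CURAND state instead):

```python
import random

n_trajectories = 100_000
n_random_vars = 14   # per-trajectory randomized winds and initial conditions

# Draws needed for one Monte Carlo cycle; generated on the GPU with
# CURAND in the flight code, so none of it crosses the CPU-GPU bus.
draws_per_cycle = n_trajectories * n_random_vars   # 1,400,000

# Host-side equivalent of what a single GPU thread does: draw its own
# disturbances from an independently seeded generator.
thread_rng = random.Random(12345)   # seed stands in for a CURAND state
one_trajectory_draws = [thread_rng.gauss(0.0, 1.0)
                        for _ in range(n_random_vars)]
```

Shipping 1.4 million values over the bus every 2 seconds is exactly the transfer that on-device generation eliminates.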

NVIDIA: What problems has CUDA helped you solve?
Jon: CUDA has provided an entry point to GPU programming and execution that is highly compatible with our current guidance and control software. As we search for new ways to incorporate uncertainty quantification in real-time guidance laws, we are naturally drawn to GPU-based Monte Carlo due to its flexibility in handling nonlinear dynamics and non-Gaussian behavior.

We leverage CUDA primarily for parallel trajectory simulation, which means we have developed dynamic models for several vehicles (mostly aircraft) that run within a GPU kernel. Launching thousands of threads means we can run numerous dynamic simulations at once.

CUDA specifically has allowed us to take existing codes and port them to the GPU relatively quickly. The core of the GPU codes we run today was originally built for CPU execution and validated extensively against experimental data. The ability to leverage legacy simulation codes in this manner has been a key enabler. It is also convenient that the same CUDA software we use for our desktop simulation codes can run on the embedded GPUs on board our robotic vehicles with minimal changes.

NVIDIA: What specific approaches did you use to apply the CUDA platform to your work?
Jon: One specific technique that comes to mind is texture memory interpolation, which we use in path planning for aerial robotic vehicles. Oftentimes we must determine whether a candidate path prematurely impacts terrain, using an on-board terrain database. For high-resolution terrain data (e.g., in mountainous areas), interpolation along the path can be very time-consuming, especially when evaluating hundreds of candidate paths in real time. We bind our terrain database to texture memory, which has led to an orders-of-magnitude reduction in the time required for terrain interpolation during impact analysis.
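GPU texture hardware performs this interpolation in fixed-function units at essentially no cost per fetch. A CPU sketch of the bilinear lookup it replaces, on a toy terrain grid (the grid and the clearance check are hypothetical illustrations, not the lab's actual database format):

```python
def bilinear(grid, x, y):
    # Interpolate terrain elevation at fractional coordinates (x, y)
    # from the four surrounding grid cells -- the same arithmetic a
    # GPU texture fetch performs in hardware.
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    h00 = grid[y0][x0]
    h10 = grid[y0][x0 + 1]
    h01 = grid[y0 + 1][x0]
    h11 = grid[y0 + 1][x0 + 1]
    top = h00 * (1 - fx) + h10 * fx
    bot = h01 * (1 - fx) + h11 * fx
    return top * (1 - fy) + bot * fy

def path_clears_terrain(grid, path, altitude):
    # A candidate path fails if any sampled point dips below terrain.
    return all(bilinear(grid, x, y) < altitude for x, y in path)

# Toy 3x3 terrain: flat ground with one 2-unit peak in the middle.
terrain = [[0.0, 0.0, 0.0],
           [0.0, 2.0, 0.0],
           [0.0, 0.0, 0.0]]
safe = path_clears_terrain(terrain, [(0.5, 0.5), (1.5, 1.5)], altitude=1.0)
```

Evaluating hundreds of candidate paths means millions of these lookups per planning cycle, which is why moving them into texture hardware pays off so dramatically.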

Our lab is becoming increasingly interested in embedded GPU hardware as we take these new control laws and port them to vehicles for testing. Some new embedded GPU devices that have been recently released by NVIDIA and others will allow us to do just that. For our research, low power requirements and small size are critical.

NVIDIA: What are some other examples of your research?
Jon: Right now we have a broad set of research initiatives, many associated with vertical lift robotics but also involving other fields. Here is a sample:

  • Development of an autonomous autorotation controller for helicopters in engine-out scenarios.
  • Design of a real-time weight and mass center estimator for rotorcraft.
  • Creation of new information-theoretic algorithms for system identification. (This will explore how we can use tools from information theory to generate better models for dynamic systems with significant uncertainty.)
  • New robust control algorithms for modular, vertical lift robotic systems.

NVIDIA: How did you become interested in this field?
Jon: I have always been fascinated by both aircraft and robotics technology. Fortunately, there are many challenges to be solved in the area of aerial robotics. Much of this work occurs at the intersection of dynamics, controls, and high performance computing.

GPUs are an emerging technology. However, we are at a pivotal moment where researchers are looking at how this technology can be leveraged for more than just graphics processing. I am excited to be on the forefront of research exploring how new emerging computing architectures can solve difficult problems in applied dynamics and robotics.

NVIDIA: Do you have advice for high school students considering a career in engineering?
Jon: Find an engineering subject you are passionate about and stick with it. Learning the math and science fundamentals can be difficult, but passion for the subject will carry you through. Perseverance is critical.

Now is a great time to be an engineer. There are so many challenges to be solved, and more opportunities than ever to contribute to science and change the world. New technologies can make a difference in people’s lives on a massive scale. It may sound like a cliché, but if you want to have an impact on the world, engineering is the place to do it.

Bio for Jon Rogers

Jon Rogers is an Assistant Professor of Aerospace Engineering at Texas A&M University. He has a passion for all things that are, or can be made, airworthy. When he is not in the lab, he can usually be found running or flying around the skies over Central Texas. Jon will be joining the Woodruff School of Mechanical Engineering at Georgia Tech as an Assistant Professor in Fall 2013.

Contact Info
jrogers8@gmail (dot) com
701 H.R. Bright Building
3141 TAMU
College Station, TX 77843-3141