GPU-powered audio has long been considered something of a unicorn in both the pro audio and accelerated computing industries. The prospect of accelerating DSP with a GPU’s parallel architecture is at once exciting and incredibly frustrating: the ease with which GPUs handle massive workloads is rivalled only by the difficulty of understanding their architecture, particularly for the average DSP developer. Until now, the state of research has concluded that, owing to heavy latency and a host of computer science issues, DSP on GPUs was neither possible nor preferable. This is no longer the case.
The implications and use cases are compelling: ultra-fast plugins, scalable power, hundreds or even thousands of channels at low latency, dramatically better software performance (10x-100x), cloud processing infrastructure, accelerated AI/ML, and more. GPUs can now offer a bright future for DSP. In this talk we will share the challenges and solutions of GPU-based DSP acceleration.
- Why GPUs?
- 3 Challenges of GPU-based Audio Processing
- Parallelism and Heterogeneity
- Multiple Tracks and Effects
- Data Transfer Problems: GPU <> CPU
- Core Component Overview: The Scheduler
- Host Scheduler and Device Scheduler
- How Scheduler Addresses the “3 Challenges”
- Some Examples: FIR and IIR Algorithms - Can They Be Parallelized?
- Algorithmic and Platform Optimization
- GPU Audio Workflow Schematics
- GPU Audio Component
- DSP API
- Processor API
- DSP Components Library
- Roadmap and Some Use Case Considerations
- Q&A and Invitation to Training Lab (Gain, IIR and FIR Convolver Hands-On Training Lab)
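To preview the FIR/IIR parallelization question from the agenda, here is a minimal illustrative sketch (not GPU Audio's actual code, and in NumPy rather than a GPU kernel): an FIR filter's output samples depend only on the input, so each sample could be computed by its own GPU thread, whereas a naive IIR filter carries a feedback dependency from one output sample to the next.

```python
import numpy as np

def fir(x, h):
    # Each y[n] depends only on the input x and taps h, never on other
    # outputs, so all n could in principle be computed concurrently
    # (e.g., one GPU thread per output sample).
    N, M = len(x), len(h)
    y = np.zeros(N)
    for n in range(N):
        y[n] = sum(h[k] * x[n - k] for k in range(M) if 0 <= n - k < N)
    return y

def iir_first_order(x, a):
    # y[n] = x[n] + a * y[n-1]: each output depends on the previous
    # output -- the recurrence that makes naive IIR evaluation
    # sequential and its GPU parallelization non-trivial.
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + a * (y[n - 1] if n > 0 else 0.0)
    return y

impulse = np.array([1.0, 0.0, 0.0, 0.0])
print(fir(impulse, np.array([0.5, 0.5])))  # [0.5, 0.5, 0.0, 0.0]
print(iir_first_order(impulse, 0.5))       # [1.0, 0.5, 0.25, 0.125]
```

Techniques such as block processing or recurrence restructuring can recover parallelism for IIR filters; how that is done in practice is part of what the session covers.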
IF YOU ARE ATTENDING ONLINE, ALL TALK SESSIONS CAN BE ACCESSED FROM THE MAIN LOBBY:
https://conference.audio.dev