
Computational Platforms for Automotive Audio

Deterministic Audio Processing for Predictive Control

February 23, 2021
9:00 AM Pacific (12:00 PM Eastern)

Overview

Expanding on its mission to further the knowledge and skills of professionals involved in audio development, the Audio Product Education Institute (APEI) presents a new webinar on Automotive Audio. This session will discuss the features that audio processing shares with other kinds of processing, and those that are unique to it. The most distinctive requirement of automotive audio applications is that processing must be deterministic for predictive control. This becomes more difficult because of standard computer hardware techniques such as caching and virtual memory.
Auto manufacturers are always pushing the limits of existing processing platforms to meet the increasing complexity of automotive systems. While they are building features like fully autonomous driving, they are simultaneously innovating on the in-cabin experience, a huge component of which is audio and voice. Immersive sound and personal audio zones, automotive active and road noise cancellation (ANC/RNC), voice-based user interfaces and in-car communications, engine sound synthesis (ESS), and electric vehicle warning sound systems (EVWSS/AVAS) are just some of the current areas of focus for automotive audio developers.
Digital signal processing (DSP) core technologies need to offer deterministic, very low processing latency with best-in-class MIPS/mW performance. Some processors feature hardware accelerators that offload common digital signal processing algorithms from the core, making them an ideal choice for real-time audio applications. Complex peripherals such as Ethernet and USB bring another level of demands.
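As an illustration of the kind of kernel such accelerators typically offload, here is a minimal block-based FIR filter sketch in plain Python. The function name and the block/state convention are illustrative, not taken from any particular vendor API; in a real product this multiply-accumulate loop would run on the DSP core or a dedicated accelerator.

```python
def fir_block(coeffs, state, block):
    """Process one audio block through an FIR filter.

    This multiply-accumulate loop is the classic kernel that DSP
    hardware accelerators offload from the core. `state` holds the
    last len(coeffs) - 1 input samples carried over from the
    previous block, so block boundaries introduce no glitches.
    """
    hist = state + block  # carried-over samples followed by the new block
    out = []
    for n in range(len(block)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            acc += c * hist[len(state) + n - k]
        out.append(acc)
    # Keep the tail of the history as the state for the next block.
    return out, hist[len(hist) - (len(coeffs) - 1):]
```

Processing per fixed-size block, with filter state carried across calls, is what lets the work per block stay constant and predictable.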
 
This event will be presented by Roger Shively (JJR Acoustics, LLC), APEI’s Automotive Pillar Chair. Following opening remarks, the event will feature two presentations, from Paul Beckmann (CTO, DSP Concepts) and John Redford (DSP Architect, Analog Devices), offering valuable perspectives on these platforms.
 
Session 1: Paul Beckmann
Mapping Automotive Audio Workloads to the Appropriate Processors

This presentation will provide an understanding of the primary automotive audio workloads, considering requirements in terms of playback processing, hands-free telephony, voice recognition, road noise cancellation, and in-car communications, as well as AVAS sound generation. This overview will be followed by a detailed look at the strengths and weaknesses of different processors in meeting the demands of those applications. The characterization will cover the available application processors, DSPs, MCUs, and processor benchmarks. Mapping workloads to the appropriate processor will be the main focus of the presentation, with details on the effects of the operating system, caching, and other processes. The presentation will also address how machine learning will impact audio processing, with predictions for the next 5 to 10 years.

 
Session 2: John Redford
Distinctive Features of Audio Processing
Audio processing, like most other forms of processing, needs to steadily increase in performance while steadily reducing the cost of its hardware and software components. Its main difference is that it must be deterministic – it must respond within a few milliseconds and cannot drop blocks without causing noticeable gaps in the output. This goes against most modern techniques: adaptive clock frequency, garbage collection, multi-threading, virtual memory, shared hardware resources, and caching. The upside is that audio memory needs are generally small, and programming can be eased by block-diagram GUIs.
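The millisecond-scale deadline above follows directly from the block size and sample rate. A minimal sketch, using an assumed 48 kHz rate and 64-sample blocks (both illustrative values, common but not mandated by any standard):

```python
SAMPLE_RATE_HZ = 48_000  # assumed sample rate, typical for automotive audio
BLOCK_SIZE = 64          # assumed samples per block; illustrative only

def block_deadline_ms(block_size: int, sample_rate_hz: int) -> float:
    """Hard deadline for producing one output block, in milliseconds.

    If processing ever exceeds this, a block is dropped and the
    listener hears a gap - hence the need for determinism: cache
    misses, page faults, and scheduler jitter must never push the
    worst case past this bound.
    """
    return 1000.0 * block_size / sample_rate_hz

# A 64-sample block at 48 kHz must be ready in about 1.33 ms - not on
# average, but for every single block.
print(round(block_deadline_ms(BLOCK_SIZE, SAMPLE_RATE_HZ), 2))  # → 1.33
```

This is why worst-case, rather than average, execution time is the figure of merit for real-time audio.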
Paul Beckmann

CTO, DSP Concepts, Santa Clara, CA USA

Paul Beckmann is the founder and CTO of DSP Concepts, a company that specializes in tools and IP for audio product developers. He has many years of experience developing audio products and creating algorithms for audio playback and voice. Paul is passionate about teaching, and has taught industry courses on DSP, audio processing, and product development. Prior to founding DSP Concepts, Paul spent 9 years at Bose Corporation and was involved in R&D and product development activities.
John Redford

DSP Architect, Analog Devices Inc., Norwood, MA USA

John Redford is a DSP Architect at Analog Devices. He previously co-founded ChipWrights Inc, a video processor company, and has worked at Pixel Magic, BBN, and Digital Equipment. He has designed about 20 chips in total, holds 17 patents (largely on processor features), and received an MSEE from Stanford University. 

Roger Shively

JJR Acoustics, LLC - Seattle, WA USA

Roger is a Co-founder and Principal of JJR Acoustics. He has over 34 years of experience in engineering research and development, with significant experience in product realization and in launching new products at OEM manufacturers around the world. Before co-founding JJR Acoustics in 2011, Roger worked as Chief Engineer of Acoustic Systems, as well as functional manager for North American and Asian engineering product development teams, in the Automotive Division of Harman International Industries Inc., a journey that began in 1986.
Roger received his degree in Acoustical Engineering from Purdue University in 1983, and finished post-graduate work in the field of finite element analysis. He is a member of the Audio Engineering Society, the Acoustical Society of America, and the Society of Automotive Engineers. He has published numerous research papers and articles in the areas of transducers, automotive audio, psychoacoustics, and computer modeling. Roger also holds U.S. and international patents related to the design of advanced acoustic systems and applications, particularly in the field of automotive audio. Roger is Co-Chair of the AES Automotive Audio Technical Committee.