Expanding on its mission of furthering the knowledge and skills of professionals involved in audio development, the Audio Product Education Institute (APEI) presents a new webinar on Automotive Audio. This session will discuss the features that are common to all kinds of processing, and those that are unique to audio processing. The most distinctive characteristic of automotive audio applications is the need for processing to be deterministic for predictive control. Determinism is made harder by standard computer hardware techniques such as caching and virtual memory.
Auto manufacturers are always pushing the limits of existing processing platforms to meet the increasing complexity of automotive systems. While they are building features like fully autonomous driving, they are simultaneously innovating on the in-cabin experience, a huge component of which is audio and voice. Immersive sound and personal audio zones, automotive active and road noise cancellation (ANC/RNC), voice-based user interfaces and in-car communications, engine sound synthesis (ESS), and electric vehicle warning sound systems (EVWSS/AVAS) are just some of the current areas of focus for automotive audio developers.
Digital signal processing (DSP) core technologies need to offer deterministic, very low processing latency with best-in-class MIPS/mW performance. Some processors feature hardware accelerators to offload common digital signal processing algorithms from the core, making them an ideal choice for real-time audio applications. Complex peripherals such as Ethernet and USB bring another level of demands.
This event will be presented by Roger Shively (JJR Acoustics, LLC), APEI’s Automotive Pillar Chair. Following opening remarks, the event will feature two presentations, from Paul Beckmann (CTO, DSP Concepts) and John Redford (DSP Architect, Analog Devices), offering a valuable perspective on these platforms.
Session 1: Paul Beckmann
Mapping Automotive Audio Workloads to the Appropriate Processors
This presentation will provide an understanding of the primary automotive audio workloads, considering requirements in terms of playback processing, hands-free telephony, voice recognition, road noise cancellation and in-car communications, as well as AVAS sound generation. This overview will be followed by a detailed analysis of the strengths and weaknesses of different processors in meeting the demands of those applications, characterizing the available application processors, DSPs, MCUs, and processor benchmarks. Mapping workloads to the appropriate processor will be the main focus of the presentation, with details on the effects of the operating system, caching, and other processes. The presentation will also address how machine learning will impact audio processing, with predictions for the next 5 to 10 years.
Session 2: John Redford
Distinctive Features of Audio Processing
Audio processing, like most other forms of processing, needs to steadily increase in performance and steadily reduce the cost of its hardware and software components. Its main difference is that it must be deterministic: it must respond within a few milliseconds, and it cannot drop blocks without causing noticeable gaps in the output. This goes against most modern techniques: adaptive clock frequency, garbage collection, multi-threading, virtual memory, shared hardware resources, and caching. The upside is that audio memory needs are generally small, and programming can be eased by block-diagram GUIs.
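To make the millisecond deadlines concrete, the sketch below computes the per-block processing budget for a few common block sizes. The sample rate and block sizes are illustrative assumptions, not figures from the webinar; the point is simply that every block must finish within its period, every time, since a single overrun produces an audible dropout.

```python
# Hypothetical sketch: per-block processing deadlines in real-time audio.
# At a given sample rate, a new block of samples arrives every
# (block_size / sample_rate) seconds; the entire processing chain must
# finish within that window on every block, or the output glitches.

def block_deadline_ms(sample_rate_hz: int, block_size_samples: int) -> float:
    """Time available to process one audio block before the next arrives."""
    return 1000.0 * block_size_samples / sample_rate_hz

if __name__ == "__main__":
    # 48 kHz is a common automotive audio rate (an assumption here).
    for block in (32, 64, 256):
        deadline = block_deadline_ms(48_000, block)
        print(f"{block:>4} samples @ 48 kHz -> {deadline:.2f} ms per block")
```

Smaller blocks mean lower latency but tighter deadlines: at 48 kHz, a 32-sample block leaves well under a millisecond of processing time, which is why worst-case (not average) execution time is what matters, and why cache misses or page faults that occasionally stretch a block past its deadline are unacceptable.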