Over the last 20+ years, Artificial Intelligence and Machine Learning in audio have changed dramatically, and the field has expanded across all vertical markets. Smart audio products now span virtually every category, with integrated AI and ML powering interactive functions, whether embedded on the device or deployed in the cloud. The Audio Product Education Institute (APEI) Artificial Intelligence (AI) and Machine Learning (ML) education pillar will explore and explain the practical approach to developing the products that are conceived, planned, designed, and implemented across the various smart audio market segments.
Expanding on its mission to improve the knowledge and skills of professionals involved in the development of audio products, from concept to delivery, APEI is proud to present a new series of webinars focusing on the technologies enabling smart audio products, intended to support this fast-growing area, where development teams are in great demand.
In this APEI event, Steven Willenborg, Artificial Intelligence and Machine Learning pillar chair and Vice President of Sales at Linkplay Technology, will provide an introduction to the product development process and explore the tools and technologies for designing and integrating AI and ML into products.
Artificial intelligence in audio products is becoming foundational to product design, features, and user experience. By delivering capabilities such as voice analysis, noise reduction, and sound recognition, smart products have become success stories. Likewise, by adding adaptive audio processing and high-quality analysis powered by AI at the edge, completely new possibilities are paving the way to new use cases and market opportunities in wearables and automotive sound.
Jeff Rogers, Co-Founder and VP of Sales at Sensory, will be the guest presenter in this webinar, explaining the potential of AI and ML at the edge. His presentation details how product developers scale processing platforms and manage constraints such as power consumption for appliances and wearables. Jeff will introduce some of Sensory’s key technologies for voice interfaces implemented entirely on-device, and discuss the development of new voice UIs. He will also explain how deep learning voice AI is finding its way into everything from appliances to toys.
Sensory is a Silicon Valley company pioneering AI at the edge and the de facto standard for enabling a voice UI on apps and devices. For more than 25 years, Sensory has developed machine learning and embedded AI applications, pioneering the concept of always-listening speech recognition more than a decade ago. Sensory is known for TrulyHandsfree, the company’s widely deployed wake word engine for voice assistants, and TrulyNatural, a large-vocabulary speech recognition and natural language understanding platform designed for home appliances. Recently, Sensory unveiled VoiceHub, a new online portal that enables developers to quickly create wake word models and voice control command sets for prototyping and proof-of-concept purposes. VoiceHub provides developers with free tools to select languages and model sizes through drop-down menus, while its intuitive interface allows them to quickly build vocabularies in dozens of languages with no coding experience required.
Sensory believes that not all applications require Internet-based voice assistant platforms and that privacy is key to consumer acceptance. But so is reliability. For such requirements, Sensory now offers its fully edge-based Custom Voice Assistants solution, which delivers total privacy.