Deep Learning for Loudspeaker Digital Twin Creation

Louise B, Kerimovs T, Schlecht SJ (2023)


Publication Type: Conference contribution

Publication year: 2023

Publisher: Audio Engineering Society

Conference Proceedings Title: AES Europe 2023: 154th Audio Engineering Society Convention

Event location: Espoo, Helsinki, FIN

ISBN: 9781713877783

Abstract

Several studies have used deep learning methods to create digital twins of amplifiers, loudspeakers, and effects pedals. This paper presents a novel method for creating a digital twin of a physical loudspeaker with stereo output. Two neural network architectures are considered: a Recurrent Neural Network (RNN) and a WaveNet-style Convolutional Neural Network (CNN). The models were tested on two datasets containing speech and music, respectively. The method of recording and preprocessing the target audio data addresses the challenge of capturing the effect of nonlinear circuits when no direct output line is available for digitization. Both model architectures successfully create a digital twin of the loudspeaker despite the absence of a direct output line, and with stereo audio. The RNN model achieved the best result on the music dataset, while the WaveNet model achieved the best result on the speech dataset.
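The paper itself does not include code; as a rough illustrative sketch of the black-box modelling approach the abstract describes, the following shows how a simple recurrent model can map a mono input sample stream to a stereo (left/right) output stream, one sample per time step. All layer sizes and weights here are hypothetical toy values for illustration, not the trained parameters from the paper.

```python
import math

def rnn_stereo_step(x, h, Wx, Wh, b, Wo, bo):
    """One RNN time step: mono input sample -> hidden state -> stereo output.

    x  : scalar input sample
    h  : hidden state (list of floats)
    Wx : input-to-hidden weights, Wh : hidden-to-hidden weights, b : hidden bias
    Wo : hidden-to-output weights (2 rows: left, right), bo : output bias
    """
    n = len(h)
    # hidden update: h' = tanh(Wx * x + Wh @ h + b)
    h_new = [math.tanh(Wx[i] * x + sum(Wh[i][j] * h[j] for j in range(n)) + b[i])
             for i in range(n)]
    # linear projection of the hidden state to two output channels
    y = [sum(Wo[c][i] * h_new[i] for i in range(n)) + bo[c] for c in range(2)]
    return y, h_new

def run_rnn(samples, hidden=4):
    """Run the toy RNN over a mono sample sequence, returning [left, right] pairs."""
    # fixed toy weights purely for illustration (in practice these are learned)
    Wx = [0.5] * hidden
    Wh = [[0.1 if i == j else 0.0 for j in range(hidden)] for i in range(hidden)]
    b = [0.0] * hidden
    Wo = [[0.25] * hidden, [-0.25] * hidden]
    bo = [0.0, 0.0]
    h = [0.0] * hidden
    out = []
    for x in samples:
        y, h = rnn_stereo_step(x, h, Wx, Wh, b, Wo, bo)
        out.append(y)
    return out
```

In a trained digital twin, the weights would be fit so that the stereo output matches recordings of the physical loudspeaker; the stateful hidden vector is what lets a recurrent model capture the device's memory and nonlinear behaviour sample by sample.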

How to cite

APA:

Louise, B., Kerimovs, T., & Schlecht, S.J. (2023). Deep Learning for Loudspeaker Digital Twin Creation. In AES Europe 2023: 154th Audio Engineering Society Convention. Espoo, Helsinki, FIN: Audio Engineering Society.

MLA:

Louise, Bryn, Teodors Kerimovs, and Sebastian J. Schlecht. "Deep Learning for Loudspeaker Digital Twin Creation." Proceedings of the AES Europe 2023: 154th Audio Engineering Society Convention, Espoo, Helsinki, FIN: Audio Engineering Society, 2023.
