AES E-Library

Word based end-to-end real time neural audio effects for equalisation

Audio production, which typically involves tools such as equalisers and reverberators, can be challenging for non-expert users because of the intricate parameters these tools expose. In this paper, we present an end-to-end neural audio effects model, based on the temporal convolutional network (TCN) architecture, that applies equalisation according to descriptive terms drawn from a crowdsourced vocabulary of word labels for audio effects. This lets users express their audio production objectives in everyday descriptive language (e.g., "bright," "muddy," "sharp") rather than in technical terminology that may not be intuitive to untrained users. We experimented with two word embedding methods to steer the TCN towards the desired output. Real-time performance is achieved by using TCNs with sparse convolutional kernels and rapidly growing dilations. Objective metrics demonstrate the efficacy of the proposed model in applying appropriately parameterised effects to audio tracks.
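The real-time claim rests on a property of TCNs: with causal dilated convolutions whose dilation grows geometrically per layer, the receptive field covers a long audio context with only a few layers. A minimal sketch of both ideas follows; the kernel size and dilation growth factor here are illustrative assumptions, not the paper's actual hyperparameters, and the convolution is a naive reference implementation rather than an optimised real-time one.

```python
def receptive_field(kernel_size, dilation_growth, n_layers):
    """Receptive field (in samples) of a stack of causal dilated convs
    with per-layer dilations 1, g, g^2, ... (g = dilation_growth)."""
    rf = 1
    for i in range(n_layers):
        rf += (kernel_size - 1) * dilation_growth ** i
    return rf

def causal_dilated_conv(x, kernel, dilation):
    """Naive causal dilated 1-D convolution (zero padding on the left,
    so each output sample depends only on current and past inputs)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = [0.0] * pad + list(x)
    return [sum(kernel[j] * xp[n + j * dilation] for j in range(k))
            for n in range(len(x))]

# With kernel size 3 and dilations growing by a factor of 10 (assumed
# values), three layers already see 223 past samples:
# receptive_field(3, 10, 3) -> 223
```

An impulse fed through `causal_dilated_conv` with dilation 2 produces taps spaced two samples apart, which is how stacked dilations widen the context without adding parameters.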

 

Permalink: https://aes2.org/publications/elibrary-page/?id=22262

