City Research Online

Anti-transfer learning for task invariance in convolutional neural networks for speech processing

Guizzo, E., Weyde, T. ORCID: 0000-0001-8028-9905 & Tarroni, G. ORCID: 0000-0002-0341-6138 (2021). Anti-transfer learning for task invariance in convolutional neural networks for speech processing. Neural Networks, 142, pp. 238-251. doi: 10.1016/j.neunet.2021.05.012


We introduce the novel concept of anti-transfer learning for speech processing with convolutional neural networks. While transfer learning assumes that the learning process for a target task will benefit from re-using representations learned for another task, anti-transfer discourages the reuse of representations learned for an orthogonal task, i.e., one that is not relevant and potentially confounding for the target task, such as speaker identity for speech recognition or speech content for emotion recognition. This extends the potential use of pre-trained models that have become increasingly available. In anti-transfer learning, we penalize similarity between activations of a network being trained on a target task and another one previously trained on an orthogonal task, which yields more suitable representations. This leads to better generalization and provides a degree of control over correlations that are spurious or undesirable, e.g. to avoid social bias. We have implemented anti-transfer for convolutional neural networks in different configurations with several similarity metrics and aggregation functions, which we evaluate and analyze with several speech and audio tasks and settings, using six datasets. We show that anti-transfer actually leads to the intended invariance to the orthogonal task and to more appropriate features for the target task. Anti-transfer learning consistently improves classification accuracy in all test cases. While anti-transfer incurs computation and memory costs at training time, there is relatively little computation cost when using pre-trained models for orthogonal tasks. Anti-transfer is widely applicable and particularly useful where a specific invariance is desirable or where labeled data for orthogonal tasks are difficult to obtain on a given dataset but pre-trained models are available.
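The core mechanism described above (penalizing similarity between the activations of the target-task network and those of a frozen network pre-trained on the orthogonal task) can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name, the channel-wise mean aggregation, and the squared-cosine similarity metric are assumptions chosen for brevity (the paper evaluates several similarity metrics and aggregation functions).

```python
import numpy as np

def anti_transfer_penalty(target_act, orthogonal_act, beta=1.0):
    """Hypothetical anti-transfer penalty sketch.

    Both arguments are assumed to be activations of shape
    (channels, time, freq) taken from corresponding convolutional
    layers: `target_act` from the network being trained, and
    `orthogonal_act` from a frozen network pre-trained on the
    orthogonal task. Returns a non-negative penalty that is large
    when the two representations are similar.
    """
    # Aggregate each feature map to one value per channel (mean pooling).
    t = target_act.reshape(target_act.shape[0], -1).mean(axis=1)
    o = orthogonal_act.reshape(orthogonal_act.shape[0], -1).mean(axis=1)
    # Squared cosine similarity: near 1 for aligned representations,
    # 0 when they are orthogonal. Epsilon guards against zero norms.
    cos = np.dot(t, o) / (np.linalg.norm(t) * np.linalg.norm(o) + 1e-8)
    return beta * cos ** 2

# During training, this penalty would be added to the target-task loss,
# pushing the network away from the orthogonal representations, e.g.:
#   loss = cross_entropy + anti_transfer_penalty(act_target, act_orth)
```

Because the orthogonal-task network is frozen, the extra cost at training time is one forward pass through it per batch, which matches the abstract's note that anti-transfer adds computation and memory cost during training only.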

Publication Type: Article
Additional Information: © 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license
Publisher Keywords: Audio processing; Convolutional neural networks; Invariance transfer; Transfer learning
Subjects: P Language and Literature > P Philology. Linguistics
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
R Medicine > RC Internal medicine > RC0321 Neuroscience. Biological psychiatry. Neuropsychiatry
Departments: School of Science & Technology > Computer Science
Text - Accepted Version




