City Research Online

An RNN-based Music Language Model for Improving Automatic Music Transcription

Sigtia, S., Benetos, E., Cherla, S., Weyde, T., Garcez, A. & Dixon, S. (2014). An RNN-based Music Language Model for Improving Automatic Music Transcription. In: Wang, H-M., Yang, Y-H. & Lee, J. H. (Eds.), Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), 27-31 October 2014, Taipei, Taiwan. Proceedings available at: http://www.terasoft.com.tw/conf/ismir2014//proceedings%5CISMIR2014_Proceedings.pdf

Abstract

In this paper, we investigate the use of Music Language Models (MLMs) for improving Automatic Music Transcription (AMT) performance. The MLMs are trained on sequences of symbolic polyphonic music from the Nottingham dataset. We train Recurrent Neural Network (RNN)-based models, as they are capable of capturing the complex temporal structure present in symbolic music data. Similar to the function of language models in automatic speech recognition, we use the MLMs to generate a prior probability for the occurrence of a sequence. The acoustic AMT model is based on probabilistic latent component analysis, and prior information from the MLM is incorporated into the transcription framework using Dirichlet priors. We test our hybrid models on a dataset of multiple-instrument polyphonic music and report a significant 3% improvement in terms of F-measure, compared to using an acoustic-only model.
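To illustrate the role of the MLM described in the abstract, the sketch below shows how an RNN can assign a prior (log-)probability to a sequence of binary piano-roll frames: at each time step the network predicts independent Bernoulli probabilities for the next frame's active notes, and the sequence prior is the product of those frame likelihoods. This is a minimal, hypothetical sketch with untrained weights (the function name, weight shapes, and the plain-tanh RNN are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def sequence_log_prior(piano_roll, Wxh, Whh, Why, bh, by):
    """Score a binary piano-roll sequence (T x n_notes) under a simple
    RNN music language model. Hypothetical untrained weights; in practice
    the MLM would be trained on symbolic music data."""
    T, _ = piano_roll.shape
    h = np.zeros(bh.shape[0])
    log_p = 0.0
    for t in range(1, T):
        # update hidden state from the previous frame
        h = np.tanh(Wxh @ piano_roll[t - 1] + Whh @ h + bh)
        # independent Bernoulli probability for each note at step t
        p = 1.0 / (1.0 + np.exp(-(Why @ h + by)))
        x = piano_roll[t]
        log_p += np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return log_p

# Example with small random weights and a sparse random piano roll
rng = np.random.default_rng(0)
n_notes, n_hidden, T = 8, 4, 5
Wxh = 0.1 * rng.normal(size=(n_hidden, n_notes))
Whh = 0.1 * rng.normal(size=(n_hidden, n_hidden))
Why = 0.1 * rng.normal(size=(n_notes, n_hidden))
bh, by = np.zeros(n_hidden), np.zeros(n_notes)
roll = (rng.random((T, n_notes)) > 0.8).astype(float)
lp = sequence_log_prior(roll, Wxh, Whh, Why, bh, by)
```

In a hybrid system such as the one the abstract describes, a prior of this kind would be combined with the acoustic model's output, e.g. by shaping Dirichlet priors over the PLCA activations rather than by direct multiplication.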

Publication Type: Conference or Workshop Item (Paper)
Subjects: M Music and Books on Music
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Departments: School of Science & Technology > Computer Science
PDF - Accepted Version
Available under the Creative Commons Attribution 4.0 International Public License.
