City Research Online

Incremental Adaptation Strategies for Neural Network Language Models

Ter-Sarkisov, A., Schwenk, H., Barrault, L. & Bougares, F. (2015). Incremental Adaptation Strategies for Neural Network Language Models. In: Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality. doi: 10.18653/v1/W15-4006

Abstract

It is now widely acknowledged that neural network language models outperform backoff language models in applications such as speech recognition and statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data, and insertion of adaptation layers. We present experimental results in a CAT (computer-assisted translation) environment, where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
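Since this record gives only the abstract, the sketch below is a minimal illustration of the two strategies it names, written against a toy feed-forward language model in PyTorch. The class FFLM, the functions insert_adaptation_layer and resampled_mix, and all layer sizes and mixing ratios are hypothetical assumptions, not the authors' implementation.

    import random
    import torch
    import torch.nn as nn

    class FFLM(nn.Module):
        # Toy feed-forward n-gram language model (assumed architecture;
        # the paper's exact network is not specified in this record).
        def __init__(self, vocab=1000, dim=64, context=3):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.hidden = nn.Linear(context * dim, dim)
            self.out = nn.Linear(dim, vocab)

        def forward(self, ctx):  # ctx: LongTensor of shape (batch, context)
            h = torch.tanh(self.hidden(self.emb(ctx).flatten(1)))
            return self.out(h)   # unnormalised scores over the vocabulary

    def insert_adaptation_layer(model, dim=64):
        # Strategy 2: freeze all trained weights and insert a new, trainable
        # layer between the hidden and output layers. Only the new layer is
        # updated on the adaptation data, so training is fast and the
        # original model cannot be degraded.
        for p in model.parameters():
            p.requires_grad = False
        adapt = nn.Linear(dim, dim)
        model.out = nn.Sequential(adapt, nn.Tanh(), model.out)
        return list(adapt.parameters())

    def resampled_mix(general_data, adaptation_data, adapt_fraction=0.5):
        # Strategy 1: continue training on a mix of the small in-domain set
        # and examples resampled from the general corpus, so continued
        # training does not overfit the adaptation data. adapt_fraction is
        # an illustrative assumption, not a value from the paper.
        n_general = int(len(adaptation_data) * (1 - adapt_fraction) / adapt_fraction)
        return list(adaptation_data) + random.sample(
            list(general_data), min(n_general, len(general_data)))

    # Usage: adapt a trained model by optimising only the inserted layer.
    model = FFLM()
    trainable = insert_adaptation_layer(model)
    optimizer = torch.optim.SGD(trainable, lr=0.01)

Both strategies avoid retraining the full model from scratch: resampling reuses the trained weights as the starting point for a short continued-training run, while the adaptation layer confines all updates to a small number of new parameters, which is the source of the speed advantage the abstract claims.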

Publication Type: Conference or Workshop Item (Paper)
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Departments: School of Science & Technology > Computer Science
Text - Published Version
Available under License Creative Commons Attribution.

Download (152kB)
