Conference paper
Annual Meeting of the Association for Computational Linguistics, 2020
APA
Park, C., Tae, Y., Kim, T., Yang, S., Khan, M. A., Park, L., & Choo, J. (2020). Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning. Annual Meeting of the Association for Computational Linguistics.
Chicago/Turabian
Park, Cheonbok, Yunwon Tae, Taehee Kim, Soyoung Yang, Mohammad Azam Khan, Lucy Park, and Jaegul Choo. “Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning.” Annual Meeting of the Association for Computational Linguistics (2020).
MLA
Park, Cheonbok, et al. “Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning.” Annual Meeting of the Association for Computational Linguistics, 2020.
BibTeX
@inproceedings{cheonbok2020a,
  title     = {Unsupervised Neural Machine Translation for Low-Resource Domains via Meta-Learning},
  year      = {2020},
  booktitle = {Annual Meeting of the Association for Computational Linguistics},
  author    = {Park, Cheonbok and Tae, Yunwon and Kim, Taehee and Yang, Soyoung and Khan, Mohammad Azam and Park, Lucy and Choo, Jaegul}
}
Unsupervised machine translation, which uses unpaired monolingual corpora as training data, has achieved performance comparable to supervised machine translation. However, it still struggles in data-scarce domains. To address this issue, this paper presents a novel meta-learning algorithm for unsupervised neural machine translation (UNMT) that trains the model to adapt to a new domain using only a small amount of training data. We assume that domain-general knowledge is a significant factor in handling data-scarce domains. Hence, we extend the meta-learning algorithm, which exploits knowledge learned from high-resource domains, to boost the performance of low-resource UNMT. Our model surpasses a transfer-learning-based approach by up to 2-3 BLEU points. Extensive experimental results show that our proposed algorithm is well suited to fast adaptation and consistently outperforms other baselines.
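As a rough sketch of the kind of meta-learning loop the abstract describes (not the authors' released code), the snippet below shows a first-order MAML-style procedure in PyTorch: each high-resource domain is treated as a task, a copy of the UNMT model is adapted on a few monolingual batches (inner loop), and the shared initialization is updated from the adapted copy's held-out loss (outer loop). The callables `sample_task` and `unmt_loss` are hypothetical placeholders supplied by the user; in the paper the unsupervised objective combines denoising auto-encoding and back-translation.

```python
# Illustrative first-order MAML-style meta-training loop for a UNMT model.
# This is a sketch under assumptions, not the authors' implementation:
# `sample_task(domain)` should return (support_batches, query_batch) of
# monolingual data, and `unmt_loss(model, batch)` should return a scalar loss.
import copy
import random

import torch


def meta_train(model, domains, sample_task, unmt_loss,
               meta_steps=1000, inner_steps=3, inner_lr=1e-4, meta_lr=1e-4):
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)

    for _ in range(meta_steps):
        # Treat each high-resource domain as a meta-learning "task".
        domain = random.choice(domains)
        support_batches, query_batch = sample_task(domain)

        # Inner loop: adapt a temporary copy of the model to this domain
        # using only a few batches, simulating low-resource adaptation.
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for batch in support_batches[:inner_steps]:
            inner_opt.zero_grad()
            unmt_loss(adapted, batch).backward()
            inner_opt.step()

        # Outer loop: evaluate the adapted copy on held-out query data and
        # move the shared initialization toward parameters that adapt well
        # (first-order approximation: reuse the adapted model's gradients).
        meta_opt.zero_grad()
        unmt_loss(adapted, query_batch).backward()
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            if p_adapted.grad is not None:
                p.grad = p_adapted.grad.detach().clone()
        meta_opt.step()

    return model
```

At test time, the meta-trained initialization would be fine-tuned on the target low-resource domain's small monolingual corpus with the same inner-loop procedure; the exact update rule (e.g., full second-order versus first-order gradients) may differ from the one used in the paper.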