Variational Unsupervised Machine Translation
Lukas Edman, Jennifer Spenader and Antonio Toral


Neural machine translation (NMT) models have recently achieved performance close to that of human translators (Hassan et al., 2018), but they require large amounts of parallel data, which is scarce, if available at all, for the vast majority of human languages. In this work, we assume only the availability of monolingual data in the source and target languages, and use a neural model to translate in an unsupervised fashion. Following recent research in this area (Artetxe et al., 2018; Lample et al., 2018), we train the model with the effective combination of denoising and back-translation. Our main contribution is the application of variational encoding, a technique borrowed from computer vision (Kingma & Welling, 2013), to unsupervised NMT, in order to account for the high variability inherent in human translation.
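To make the variational component concrete, the following is a minimal sketch of variational encoding via the reparameterization trick of Kingma & Welling (2013), written in PyTorch. It is not the authors' implementation; the module name, dimensions, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn


class VariationalEncoder(nn.Module):
    """Maps an encoder hidden state to a latent code z ~ N(mu, sigma^2).
    A generic sketch, not the paper's actual architecture."""

    def __init__(self, hidden_dim: int = 512, latent_dim: int = 128):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)

    def forward(self, h: torch.Tensor):
        mu = self.to_mu(h)
        logvar = self.to_logvar(h)
        # Reparameterization: sample eps ~ N(0, I) so that gradients
        # flow through mu and sigma rather than through the sampling step.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        # KL divergence from q(z|x) to the standard normal prior N(0, I);
        # in a VAE this term is added to the reconstruction loss.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl.mean()

In an unsupervised NMT setup, the latent code z would typically condition the decoder during both denoising and back-translation, with the KL term weighted (and often annealed) against the reconstruction loss; the details of how this is done here are left to the paper's method section.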