Baseline Transliteration Corpus for Improved English-Amharic Machine Translation

Authors

  • Yohannes Biadgligne, Sudan University of Science and Technology (SUST) and Bahir Dar Institute of Technology (BIT)
  • Kamel Smaili, Loria, Université de Lorraine, France

DOI:

https://doi.org/10.31449/inf.v47i6.4395

Abstract

Machine translation (MT) between English and Amharic is one of the least studied
and, in terms of performance, least successful areas of MT research. To address
this, we apply corpus transliteration and augmentation techniques to improve MT
performance for this language pair. This paper presents the creation, augmentation,
and use of an Amharic-to-English transliteration corpus for NMT experiments. The
corpus contains 450,608 parallel sentences before preprocessing and, after
preprocessing, is used to train three NMT architectures: Recurrent Neural Networks
with an attention mechanism (RNNs), Gated Recurrent Units (GRUs), and Transformers.
For the Transformer-based experiments, three models with different hyperparameters
are trained. The BLEU scores of all NMT models in this study improve on previous
work, and one of the three Transformer models achieves the highest BLEU score yet
reported for this language pair.
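The transliteration step the abstract refers to can be illustrated with a minimal character-level sketch. The mapping below is hypothetical and covers only a few Ethiopic (fidel) characters for demonstration; the paper's actual transliteration scheme and character inventory are not reproduced here.

```python
# Illustrative Amharic-to-Latin transliteration (not the paper's scheme).
# Each fidel character encodes a consonant-vowel syllable, so one Amharic
# character typically maps to one or two Latin characters.
FIDEL_TO_LATIN = {
    "ሰ": "se",
    "ላ": "la",
    "ም": "m",
}

def transliterate(text: str) -> str:
    """Replace each mapped Amharic character with its Latin rendering;
    unmapped characters pass through unchanged."""
    return "".join(FIDEL_TO_LATIN.get(ch, ch) for ch in text)

print(transliterate("ሰላም"))  # selam ("hello")
```

In a full system, the mapping would cover the entire fidel syllabary, and the transliterated Amharic side would be paired with the English side to form the parallel training corpus.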

Published

2023-06-15

How to Cite

Baseline Transliteration Corpus for Improved English-Amharic Machine Translation. (2023). Informatica, 47(6). https://doi.org/10.31449/inf.v47i6.4395