DocumentCode
2950432
Title
Evaluating English to Arabic machine translators
Author
Hailat, Taghreed ; Al-Kabi, Mohammed N. ; Alsmadi, Izzat M. ; Al-Shawakfa, Emad
fYear
2013
fDate
3-5 Dec. 2013
Firstpage
1
Lastpage
6
Abstract
Location and language now pose fewer barriers to the expansion and spread of information around the world. Machine translators perform the tedious task of translation between languages quickly and reliably. However, compared with human translation, issues related to semantic meaning can still arise. Different machine translators differ in their effectiveness, and they can be evaluated either by humans or by automatic methods. In this study, we evaluate the effectiveness of two popular Machine Translation (MT) systems, Google Translate and Babylon, at translating sentences from English to Arabic, using an automatic evaluation method called Bilingual Evaluation Understudy (BLEU). Our preliminary tests indicate that the Google Translate system is more effective than the Babylon MT system at translating English sentences into Arabic.
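The BLEU metric used in the study scores a machine translation by its clipped n-gram overlap with a reference translation, scaled by a brevity penalty. A minimal pure-Python sketch of sentence-level BLEU (single reference, no smoothing; the paper's exact BLEU configuration is not stated here, so this is illustrative only):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with one reference and a brevity penalty.

    Illustrative sketch: no smoothing, whitespace tokenization.
    """
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # unsmoothed BLEU is zero if any precision is zero
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An identical candidate and reference score 1.0, and a candidate sharing no words with the reference scores 0.0; real evaluations (as in this paper) aggregate over a test corpus rather than single sentences.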
Keywords
language translation; natural language processing; BLEU; Babylon MT system; Babylon machine translation systems; Bilingual Evaluation Understudy; English-to-Arabic machine translator evaluation; Google Translate MT system; automatic evaluation method; semantic meanings; sentence translation; Computational linguistics; Computers; Conferences; Electrical engineering; Google; Internet; Measurement; Arabic MT; Automatic Evaluation of Machine Translation; English MT; statistical MT
fLanguage
English
Publisher
ieee
Conference_Titel
2013 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT)
Conference_Location
Amman
Print_ISBN
978-1-4799-2305-2
Type
conf
DOI
10.1109/AEECT.2013.6716439
Filename
6716439
Link To Document