Google Translate Versus Matecat for Religious Text Translation

A Study of Iranian Students' Speed, Accuracy, and Perceptions

Authors

  • Abolfazl Khorasanizadeh Gazki (corresponding author), M.A. in Translation Studies, Department of English Language and Literature, Faculty of Foreign Languages, University of Isfahan, Isfahan, Iran
  • Dariush Nejad Ansari Mahabadi, Associate Professor, Department of English Language and Literature, Faculty of Foreign Languages, University of Isfahan, Isfahan, Iran

Abstract

Technological advancement has led to the advent of numerous Machine Translation (MT) systems and Computer-Assisted Translation (CAT) tools. This study compared the effectiveness of Google Translate, an MT system, with Matecat, a CAT tool, examining their impact on translation quality and speed together with user feedback on both systems. The research involved two classes at the Islamic Azad University of Qom: 16 students were assigned to the Matecat group and 11 to the Google Translate group. All participants first translated a 250-word religious text using dictionaries and completed a placement test, which showed they shared an intermediate level of English proficiency. After receiving instruction, participants used their assigned system to translate the same text for the post-test. The research team assessed translation quality using Waddington's model. Dependent t-tests showed that Google Translate significantly reduced translation time without improving quality, whereas Matecat produced translations that were both faster and of higher quality than unaided human translation. Independent t-tests found no significant differences between the two systems in translation accuracy or speed. Students responded positively to both systems, noting their user-friendly interfaces and their accurate handling of religious terminology and grammar. They expressed satisfaction with both tools and indicated they would continue using them.
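The dependent (paired) t-test mentioned in the abstract compares each student's pre-test and post-test measurements. A minimal sketch of that computation is shown below; the translation times are invented placeholders for illustration, not the study's actual data.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Dependent (paired) t-statistic for per-student measurements
    taken before and after introducing a translation tool."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    d_mean = mean(diffs)
    d_sd = stdev(diffs)           # sample standard deviation of the differences
    t = d_mean / (d_sd / sqrt(n))
    return t, n - 1               # t-statistic and degrees of freedom

# Hypothetical translation times (minutes) for one group of students,
# translating the same 250-word text with a dictionary vs. with a tool.
pre = [42, 38, 45, 50, 40, 44, 39, 47]
post = [30, 29, 33, 36, 31, 32, 28, 35]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")
```

The resulting t-statistic would then be compared against the critical value for n − 1 degrees of freedom to decide whether the time reduction is significant; the independent t-test between the two groups follows the same logic but uses a pooled standard error instead of per-student differences.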

Keywords:

Google Translate, Machine Translation, Machine Translation Output, Matecat, Translation Quality Assessment

References

Doherty, S., & Kenny, D. (2014). The design and evaluation of a statistical machine translation syllabus for translation students. The Interpreter and Translator Trainer, 8(2), 295–315.

Läubli, S., Amrhein, C., Düggelin, P., Gonzalez, B., Zwahlen, A., & Volk, M. (2019). Post-editing productivity with neural machine translation: An empirical assessment of speed and quality in the banking and finance domain. In Proceedings of Machine Translation Summit XVII: Research Track (pp. 267–272). European Association for Machine Translation.

Macken, L., Prou, D., & Tezcan, A. (2020). Quantifying the effect of machine translation in a high-quality human translation production process. Informatics, 7(2), 1–19.

Pal, P., Virkar, Y., Mathur, P., Chronopoulou, A., & Federico, M. (2023). Improving isochronous machine translation with target factors and auxiliary counters. Interspeech 2023, 37–41. https://doi.org/10.21437/Interspeech.2023-1063

Purwaningsih, D. (2016). Comparing translation produced by Google Translate tool to translation produced by translator. Journal of English Language Studies, 1(1).

Tasdemir, S., Lopez, E., Satar, M., & Riches, N. (2023). Teachers’ perceptions of machine translation as a pedagogical tool. The JALT CALL Journal, 19(1), 92–112.

Xu, H. (2024). Assessment of computer-aided translation quality based on large-scale corpora. International Journal of e-Collaboration (IJeC), 20(1), 1–14.

Published

2025-03-07

How to Cite

Khorasanizadeh Gazki, A., & Nejad Ansari Mahabadi, D. (2025). Google Translate Versus Matecat for Religious Text Translation: A Study of Iranian Students’ Speed, Accuracy, and Perceptions. Iranian Journal of Translation Studies, 22(88). Retrieved from https://journal.translationstudies.ir/ts/article/view/1217