Title of article :
A Transformer-based Approach for Persian Text Chunking
Author/Authors :
Parsa Kavehzadeh, Computer Engineering Department, Amirkabir University of Technology; Mohammad Mahdi Abdollah Pour, Computer Engineering Department, Amirkabir University of Technology; Saeedeh Momtazi, Computer Engineering Department, Amirkabir University of Technology
Abstract :
Over the last few years, text chunking has played a significant role in sequence labeling tasks. Although a large variety of methods have been proposed for shallow parsing in English, most of the approaches proposed for text chunking in the Persian language are based on simple and traditional concepts. In this paper, we propose using state-of-the-art transformer-based contextualized models, namely BERT and XLM-RoBERTa, as the main structure of our models. A conditional random field (CRF), a combination of bidirectional long short-term memory (BiLSTM) and CRF, and a simple dense layer are employed on top of the transformer-based models to enhance the models' performance in predicting the chunk labels. Moreover, we provide a new dataset for noun phrase chunking in Persian, which consists of annotated Persian news text. Our experiments reveal that XLM-RoBERTa achieves the best performance among all the architectures tried on the proposed dataset. The obtained results also show that a single CRF layer yields better results than a dense layer, and even than the combination of BiLSTM and CRF.
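To make the best-performing configuration concrete, the following is a minimal sketch (not the authors' released code) of an XLM-RoBERTa encoder whose token representations feed a linear emission layer and a single CRF that predicts chunk labels. It assumes the Hugging Face transformers and pytorch-crf packages; the model name, class name, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel
from torchcrf import CRF


class XLMRobertaCRFChunker(nn.Module):
    """Illustrative transformer + CRF chunker in the spirit of the paper."""

    def __init__(self, num_labels: int, model_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained(model_name)
        # Linear layer maps each contextualized token vector to label scores.
        self.emission = nn.Linear(self.encoder.config.hidden_size, num_labels)
        # A single CRF layer models dependencies between adjacent chunk labels.
        self.crf = CRF(num_labels, batch_first=True)

    def forward(self, input_ids, attention_mask, labels=None):
        hidden = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        emissions = self.emission(hidden)
        mask = attention_mask.bool()
        if labels is not None:
            # The CRF returns a log-likelihood; negate it for a training loss.
            return -self.crf(emissions, labels, mask=mask, reduction="mean")
        # At inference time, Viterbi-decode the most likely label sequence.
        return self.crf.decode(emissions, mask=mask)
```

The BiLSTM+CRF variant described in the abstract would insert an nn.LSTM(bidirectional=True) between the encoder output and the emission layer, while the dense-layer variant would replace the CRF with a per-token softmax over the labels.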
Keywords :
Persian text chunking , sequence labeling , deep learning , contextualized word representation
Journal title :
Journal of Artificial Intelligence and Data Mining