Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.13091/1011
Title: Syntax-ignorant N-gram embeddings for dialectal Arabic sentiment analysis
Authors: Mulki, Hala
Haddad, Hatem
Gridach, Mourad
Babaoglu, İsmail
Keywords: n-gram embeddings
Unordered compositionality
Arabic dialects
Sentiment analysis
Issue Date: 2021
Publisher: Cambridge University Press
Abstract: Arabic sentiment analysis models have recently employed compositional paragraph or sentence embedding features to represent informal Arabic dialectal content. These embeddings are mostly composed via ordered, syntax-aware composition functions and learned within deep neural network architectures. Given the differences in syntactic structure and word order among Arabic dialects, a sentiment analysis system developed for one dialect might not be effective for others. Here we present syntax-ignorant, sentiment-specific n-gram embeddings for sentiment analysis of several Arabic dialects. The novelty of the proposed model lies in its features and architecture: sentiment is expressed by embeddings composed via an unordered additive composition function and learned within a shallow neural architecture. To evaluate the generated embeddings, they were compared with state-of-the-art word/paragraph embeddings. This involved investigating their efficiency as expressive sentiment features, based on visualisation maps constructed for our n-gram embeddings and for word2vec/doc2vec. In addition, using several Eastern/Western Arabic datasets of single-dialect and multi-dialectal content, the ability of our embeddings to recognise sentiment was evaluated against word/paragraph embeddings-based models, within both shallow and deep neural network architectures and with two unordered composition functions. The results revealed that the introduced syntax-ignorant embeddings can efficiently represent single dialects and combinations of different dialects: our shallow sentiment analysis model, trained with the proposed n-gram embeddings, outperformed the word2vec/doc2vec models and rivalled deep neural architectures while consuming remarkably less training time.
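The core idea in the abstract, unordered additive composition, can be illustrated with a minimal sketch. This is not the authors' implementation; the vocabulary, embedding dimension, and function names below are illustrative assumptions. It shows only the composition step: a text's representation is the sum of its n-gram embeddings, so word order does not affect the result.

```python
# Minimal sketch of syntax-ignorant additive composition (assumed names and
# dimensions, not the paper's code): a text is represented as the sum of the
# embeddings of its n-grams, deliberately ignoring word order.
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8
# Hypothetical n-gram vocabulary; in the paper these embeddings would be
# learned to be sentiment-specific, here they are random placeholders.
vocab = {"الفيلم": 0, "رائع": 1, "الفيلم رائع": 2, "ممل": 3}
embeddings = rng.normal(size=(len(vocab), EMBED_DIM))

def compose(ngrams):
    """Unordered additive composition: sum of known n-gram embeddings."""
    idx = [vocab[g] for g in ngrams if g in vocab]
    return embeddings[idx].sum(axis=0)

# Because addition is commutative, reordering the n-grams yields the
# same composed vector — the representation is syntax-ignorant.
a = compose(["الفيلم", "رائع"])
b = compose(["رائع", "الفيلم"])
assert np.allclose(a, b)
```

In a full model, the composed vector would feed a shallow classifier; the abstract also mentions a second unordered composition function (e.g. averaging, which simply divides the sum by the n-gram count).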
URI: https://doi.org/10.1017/S135132492000008X
https://hdl.handle.net/20.500.13091/1011
ISSN: 1351-3249
1469-8110
Appears in Collections: Mühendislik ve Doğa Bilimleri Fakültesi Koleksiyonu / Faculty of Engineering and Natural Sciences Collection
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collections
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collections


Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.