
Dual-Channel Attention Model for Text Sentiment Analysis

Volume 15, Number 3, March 2019, pp. 834-841
DOI: 10.23940/ijpe.19.03.p12.834841

Hui Li, Yuanyuan Zheng, and Pengju Ren

School of Physics and Electronic Information, Henan Polytechnic University, Jiaozuo, 454000, China

(Submitted on October 20, 2018; Revised on November 21, 2018; Accepted on December 25, 2018)

Abstract:

To address the problem that a single-channel neural network model cannot fully extract text information, a Dual-Channel Attention Model (DCAM) is proposed for text sentiment analysis. First, the text is represented as a matrix using word vectors trained with Word2Vec. Second, this matrix is fed as input to a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network for feature extraction. Third, an attention mechanism is introduced to extract the important feature information from each channel. Finally, the features of the two channels are merged, and a classification layer performs the sentiment classification. The model is evaluated on a Chinese corpus. According to the experimental results, the accuracy of the proposed model reaches 92.7%, which is clearly superior to single-channel neural network models.
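The architecture outlined in the abstract (word-vector matrix, parallel CNN and LSTM channels, per-channel attention pooling, merged features, classification layer) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; all hyperparameters (vocabulary size, embedding dimension, filter count, kernel size, hidden size) are assumptions, and the embedding layer stands in for the pretrained Word2Vec vectors.

```python
import torch
import torch.nn as nn


class DCAM(nn.Module):
    """Sketch of a dual-channel attention model: CNN and LSTM channels
    over the same word-vector matrix, attention pooling per channel,
    then merged features into a classifier. Hyperparameters are illustrative."""

    def __init__(self, vocab_size=10000, embed_dim=128, n_filters=100,
                 kernel_size=3, lstm_hidden=100, n_classes=2):
        super().__init__()
        # Stand-in for pretrained Word2Vec vectors: text -> matrix of word vectors
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Channel 1: CNN over the word-vector matrix
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size,
                              padding=kernel_size // 2)
        # Channel 2: LSTM over the same matrix
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)
        # Per-channel attention: score each time step, pool with softmax weights
        self.att_cnn = nn.Linear(n_filters, 1)
        self.att_lstm = nn.Linear(lstm_hidden, 1)
        # Merge both channel features, then classify
        self.classifier = nn.Linear(n_filters + lstm_hidden, n_classes)

    @staticmethod
    def attend(features, scorer):
        # features: (batch, seq_len, dim) -> attention-weighted sum over seq_len
        weights = torch.softmax(scorer(features), dim=1)   # (batch, seq_len, 1)
        return (weights * features).sum(dim=1)             # (batch, dim)

    def forward(self, token_ids):
        x = self.embed(token_ids)                          # (batch, seq, embed)
        # Conv1d expects (batch, channels, seq), so transpose around it
        c = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(x)                                # (batch, seq, hidden)
        merged = torch.cat([self.attend(c, self.att_cnn),
                            self.attend(h, self.att_lstm)], dim=1)
        return self.classifier(merged)                     # class logits


model = DCAM()
# Batch of 4 token-id sequences of length 20
logits = model(torch.randint(0, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```

In this sketch each channel is pooled into a fixed-size vector by its own attention layer before the two vectors are concatenated, which matches the abstract's "merge the text features" step.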

 


         
