
Applications of BERT in sentiment analysis
- 1 Northeastern University
* Author to whom correspondence should be addressed.
Abstract
This study focuses on sentiment analysis and examines how it is addressed in Natural Language Processing (NLP) using Bidirectional Encoder Representations from Transformers (BERT). BERT's bidirectional Transformer architecture, pre-trained with Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), has driven major advances in NLP. This paper describes the BERT architecture, its pre-training methods, and its fine-tuning for sentiment analysis tasks. It then compares BERT's performance with that of other deep learning models, classical machine learning algorithms, and traditional rule-based techniques, highlighting the latter's limited ability to handle linguistic nuance and context. Studies demonstrating the consistency and accuracy of BERT-based sentiment analysis are reviewed, along with the challenges posed by irony, sarcasm, and domain-specific data. Finally, the study examines the ethical and privacy concerns that sentiment analysis inherently raises, makes recommendations for further research, and shows how integrating sentiment analysis with other domains can lead to multidisciplinary breakthroughs that offer more comprehensive insights and applications.
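As a concrete illustration of the fine-tuning workflow summarized above, the minimal sketch below adapts a pre-trained BERT checkpoint to binary sentiment classification using the Hugging Face Transformers and PyTorch libraries. The model name, toy examples, labels, and hyperparameters are illustrative assumptions, not the paper's actual experimental setup.

# Minimal sketch (illustrative, not the paper's setup): fine-tuning BERT for
# binary sentiment classification with Hugging Face Transformers and PyTorch.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the pre-trained BERT base model with a 2-label classification head.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy labeled examples (1 = positive, 0 = negative) standing in for a real corpus.
texts = ["The movie was wonderful!", "The plot was dull and predictable."]
labels = torch.tensor([1, 0])

# Tokenize with padding/truncation so both sequences fit in one batch tensor.
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

# One fine-tuning step: forward pass, cross-entropy loss, backward pass, update.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

# Inference: the argmax over the two logits gives the predicted sentiment class.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())

In practice, fine-tuning would iterate over mini-batches of a labeled sentiment corpus for a few epochs, but the single step above captures the mechanism the paper describes: the pre-trained encoder is reused and only a small classification head plus the encoder weights are updated on task-specific data.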
Keywords
BERT, Sentiment Analysis, Natural Language Processing, Deep Learning, Comparative Analysis
Cite this article
Su, Z. (2024). Applications of BERT in sentiment analysis. Applied and Computational Engineering, 92, 147-152.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 6th International Conference on Computing and Data Science
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish with this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).