Deep Learning Approaches for Affective Computing in Text

ISBN13: 9798369305027|ISBN13 Softcover: 9798369305034|EISBN13: 9798369305041
DOI: 10.4018/979-8-3693-0502-7.ch015
MLA

Cabada, Ramón Zatarain, et al. "Deep Learning Approaches for Affective Computing in Text." Advanced Applications of Generative AI and Natural Language Processing Models, edited by Ahmed J. Obaid, et al., IGI Global, 2024, pp. 306-339. https://doi.org/10.4018/979-8-3693-0502-7.ch015



Abstract

Natural language processing (NLP) is one of the oldest fields of artificial intelligence, and it has made remarkable advances in recent years thanks to new machine learning techniques, particularly deep learning methods such as long short-term memory (LSTM) networks and transformers. This chapter presents an overview of how deep learning techniques have been applied to NLP in the area of affective computing. It examines traditional and novel deep learning architectures developed for NLP tasks, including recurrent neural networks (RNNs), LSTM networks, and transformers. A methodology for training and fine-tuning NLP models is also presented. The chapter includes Python code demonstrating two NLP case studies in the educational domain: text classification and sentiment analysis. In both cases, the transformer-based model BERT produced the best results.
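The abstract credits transformer-based models (BERT) with the best results in both case studies. The architectural core of a transformer is scaled dot-product self-attention, which can be sketched in a few lines. This is a minimal illustrative example using only NumPy, not the chapter's actual code; the shapes and weight names are assumptions chosen for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single sequence.
    X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) similarity
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights               # context vectors, attention map

# Toy dimensions for illustration only.
rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
```

In a full transformer, several such attention heads run in parallel and are stacked with feed-forward layers and residual connections; fine-tuning BERT, as in the chapter's case studies, reuses these pretrained layers and trains a small classification head on top.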
