Explainable Deep Reinforcement Learning for Knowledge Graph Reasoning
Copyright: © 2023 | Pages: 16
ISBN13: 9781668491898 | ISBN10: 1668491893 | ISBN13 Softcover: 9781668491904 | EISBN13: 9781668491911
DOI: 10.4018/978-1-6684-9189-8.ch012
Cite Chapter

MLA

Wang, Di. "Explainable Deep Reinforcement Learning for Knowledge Graph Reasoning." Recent Developments in Machine and Human Intelligence, edited by S. Suman Rajest, et al., IGI Global, 2023, pp. 168-183. https://doi.org/10.4018/978-1-6684-9189-8.ch012

APA

Wang, D. (2023). Explainable Deep Reinforcement Learning for Knowledge Graph Reasoning. In S. Rajest, B. Singh, A. J. Obaid, R. Regin, & K. Chinnusamy (Eds.), Recent Developments in Machine and Human Intelligence (pp. 168-183). IGI Global. https://doi.org/10.4018/978-1-6684-9189-8.ch012

Chicago

Wang, Di. "Explainable Deep Reinforcement Learning for Knowledge Graph Reasoning." In Recent Developments in Machine and Human Intelligence, edited by S. Suman Rajest, et al., 168-183. Hershey, PA: IGI Global, 2023. https://doi.org/10.4018/978-1-6684-9189-8.ch012

Abstract

Automated reasoning, particularly inferring missing facts from existing observations, remains a considerable challenge for artificial intelligence. Knowledge graph (KG) reasoning can significantly enhance the performance of context-aware AI systems such as GPT. Deep reinforcement learning (DRL), an influential framework for sequential decision-making, is well suited to uncertain and dynamic environments. In DRL, the definitions of the state space, action space, and reward function directly dictate performance. This chapter provides an overview of the pipeline and advantages of leveraging DRL for knowledge graph reasoning, and delves into the challenges of KG reasoning and the features of existing studies. It offers a comparative study of widely used state spaces, action spaces, reward functions, and neural networks. Furthermore, it evaluates the pros and cons of DRL-based methodologies and compares the performance of nine benchmark models across six distinct datasets and four evaluation metrics.
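To make the pipeline concrete, the following is a minimal sketch of how DRL-based KG reasoning is typically framed: an agent walks the graph from a query entity, the state combines its current position with the query, the action space is the set of outgoing edges, and a terminal reward signals whether the target entity was reached. The triples, entity names, and class design here are hypothetical illustrations, not taken from the chapter or any specific benchmark model.

```python
# Sketch of a DRL-style KG reasoning environment (illustrative, not the
# chapter's implementation). The agent hops along relations to answer a
# query such as (Paris, ?, Europe).
from collections import defaultdict
import random

class KGEnv:
    def __init__(self, triples, max_hops=3):
        # Adjacency list: outgoing (relation, tail) pairs form the action space.
        self.graph = defaultdict(list)
        for head, rel, tail in triples:
            self.graph[head].append((rel, tail))
        self.max_hops = max_hops

    def reset(self, source, target):
        self.current, self.target, self.hops = source, target, 0
        # State: current position plus the query target.
        return (self.current, self.target)

    def actions(self):
        return self.graph[self.current]

    def step(self, action):
        _relation, nxt = action
        self.current = nxt
        self.hops += 1
        done = self.current == self.target or self.hops >= self.max_hops
        # Sparse terminal reward: 1 only if the target entity is reached.
        reward = 1.0 if self.current == self.target else 0.0
        return (self.current, self.target), reward, done

# Toy knowledge graph (hypothetical facts for illustration).
triples = [("Paris", "capital_of", "France"),
           ("France", "part_of", "Europe"),
           ("Paris", "located_in", "Europe")]

env = KGEnv(triples)
state, done, path = env.reset("Paris", "Europe"), False, []
while not done:
    # A trained policy network would score actions here; we pick randomly.
    action = random.choice(env.actions())
    state, reward, done = env.step(action)
    path.append(action)
```

In actual DRL-based reasoners, the random choice above is replaced by a neural policy over embedded states and actions, and the sparse terminal reward is often shaped to ease training; those design choices are precisely what the chapter's comparative study examines.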
