Call for Chapters: Bridging the Gap Between Human Factors and Explainable Artificial Intelligence


Abdul Rehman Gilal, Universiti Teknologi PETRONAS, Malaysia
Jafreezal Jaafar, Universiti Teknologi PETRONAS, Malaysia
Shuib Basri, Universiti Teknologi PETRONAS, Malaysia
Rehan Akbar, Universiti Teknologi PETRONAS, Malaysia

Call for Chapters

Proposals Submission Deadline: July 20, 2022
Full Chapters Due: October 2, 2022
Submission Date: October 2, 2022


Artificial Intelligence (AI) systems are being used effectively in many application domains and produce results with high accuracy. However, because these systems are trained on big, heterogeneous data, their results are often so complex that they are nearly impossible for humans to understand. This is known as the black-box problem, and it conflicts with requirements for verifiability, accountability, and transparency. For example, under the General Data Protection Regulation, stakeholders have the right to an explanation of a system's decisions. Since explaining complex trained models (e.g., deep learning models) is nearly impossible for humans, researchers use Explainable Artificial Intelligence (XAI) models to explain the decisions made by AI systems. Yet the explanations produced by XAI models are neither standardized nor assessed systematically. Most of the time, human factors (e.g., knowledge, background, race, gender, age) are not considered when explanation models are trained; in fact, the explanations of XAI models cannot be generalized to all humans. Humans are complex, so human factors must be understood before useful explanations can be produced. We therefore believe that this book will fill the gap between human factors and XAI.


Artificial Intelligence (AI) has become a necessary part of the modern world. Explainable AI (XAI), a subfield of AI, will always be needed to help people understand complex AI models. Incomplete explanations produced by XAI can create social, economic, or environmental problems. It is therefore important to know what makes an explanation complete for the user of an AI system; only then can we help decision makers reach the right decisions. We cannot simply produce explanations without understanding the humans who make and use them. The main objective of this book is thus to produce content that genuinely bridges human factors and AI/XAI. The book will bring together the latest content, information, issues, and challenges in one place for teachers, researchers, and students, helping readers understand and evaluate current challenges before designing, developing, or evaluating AI or XAI systems. In this way, the AI industry can serve the community better.

Target Audience

1. Professors and Lecturers
2. Undergraduate students
3. Postgraduate students
4. Researchers
5. Data Analysts
6. AI experts
7. Decision makers
8. Modern Psychologists
9. System Developers
10. Business Analysts

Recommended Topics

1. Introduction to Explainable Artificial Intelligence (XAI) and Human Factors
2. Impact of Incomplete Explanations on Society
3. What Makes an Explanation Complete? A Psychological Perspective
4. Completeness of Explainable Artificial Intelligence (XAI) Systems
5. Human in the Loop of XAI Systems
6. Human-centered Explanations
7. Human-centered XAI Designs
8. Human-centered XAI Modeling
9. Human-centered XAI Evaluation
10. Benefits of Human-centered Explanations
11. Future of XAI

Submission Procedure

Researchers and practitioners are invited to submit, on or before July 20, 2022, a chapter proposal of 1,000 to 2,000 words clearly explaining the mission and concerns of their proposed chapter. Authors will be notified by August 3, 2022, about the status of their proposals and sent chapter guidelines. Full chapters are expected to be submitted by October 2, 2022, and all interested authors must consult the guidelines for manuscript submissions prior to submission. All submitted chapters will be reviewed on a double-blind basis. Contributors may also be requested to serve as reviewers for this project.

Note: There are no submission or acceptance fees for manuscripts submitted to this book publication, Bridging the Gap Between Human Factors and Explainable Artificial Intelligence. All manuscripts are accepted based on a double-blind peer review editorial process.

All proposals should be submitted through the eEditorial Discovery® online submission manager.


This book is scheduled to be published by IGI Global (formerly Idea Group Inc.), an international academic publisher of the "Information Science Reference" (formerly Idea Group Reference), "Medical Information Science Reference," "Business Science Reference," and "Engineering Science Reference" imprints. IGI Global specializes in publishing reference books, scholarly journals, and electronic databases featuring academic research on a variety of innovative topic areas including, but not limited to, education, social science, medicine and healthcare, business and management, information science and technology, engineering, public administration, library and information science, media and communication studies, and environmental science. For additional information regarding the publisher, please visit the IGI Global website. This publication is anticipated to be released in 2023.

Important Dates

July 20, 2022: Proposal Submission Deadline
August 3, 2022: Notification of Acceptance
October 2, 2022: Full Chapter Submission
November 15, 2022: Review Results Returned
December 27, 2022: Final Acceptance Notification
January 10, 2023: Final Chapter Submission


Abdul Rehman Gilal
Universiti Teknologi PETRONAS

