Analyzing Linguistic Features for Classifying Why-Type Non-Factoid Questions

Manvi Breja, Sanjay Kumar Jain
DOI: 10.4018/IJITWE.2021070102

Abstract

Why-type non-factoid questions are complex and difficult to answer compared to factoid questions. A key challenge in finding an accurate answer to a non-factoid question is understanding the intent of the user, which varies with the user's knowledge and with the context in which the question is asked. Correctly predicting a question's type and its expected answer type with a classification model is important, as it affects all subsequent processing of the answer. In this paper, a classification model trained on a combination of lexical, syntactic, and semantic features is proposed to classify open-domain why-type questions. Various supervised classifiers are trained on the featured dataset, of which the support vector machine achieves the highest accuracy: 81% in determining question type and 76.8% in determining answer type, a 14.6% improvement in answer-type prediction over a baseline why-type classifier with 62.2% accuracy.

Introduction

Question Answering Systems (QASs) answer a user's natural language question by employing the integrated techniques of Natural Language Processing (NLP) and Information Retrieval (IR). In English, questions are categorized as (1) Factoid Questions, which are unambiguous and directly answered in a single phrase or sentence, usually beginning with 'what', 'when', 'where', 'who', or 'which'; and (2) Non-Factoid Questions, which are ambiguous and seek varied explanations, generally beginning with 'why' or 'how'. Question analysis is one of the major components of a QAS and performs three major tasks: (1) Question Pre-Processing, which pre-processes a question and extracts important keywords; (2) Question Classification, which analyzes a question by gauging its type and expected answer type; and (3) Question Reformulation, which reformulates a question to retrieve relevant answer documents.
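The coarse factoid/non-factoid split described above can be sketched as a simple rule on the leading interrogative word. This is purely an illustration of the categorization, not the paper's classifier (which, as described later, uses richer lexical, syntactic, and semantic features):

```python
# Illustrative sketch (not the authors' method): assign a coarse
# factoid/non-factoid category from the question's leading wh-word.
FACTOID_WORDS = {"what", "when", "where", "who", "which"}
NON_FACTOID_WORDS = {"why", "how"}

def coarse_question_type(question: str) -> str:
    """Return 'factoid', 'non-factoid', or 'unknown' from the first token."""
    tokens = question.strip().lower().split()
    first = tokens[0].rstrip("?,.") if tokens else ""
    if first in FACTOID_WORDS:
        return "factoid"
    if first in NON_FACTOID_WORDS:
        return "non-factoid"
    return "unknown"

print(coarse_question_type("Why do leaves change color?"))  # non-factoid
print(coarse_question_type("Who wrote Hamlet?"))            # factoid
```

Such a surface rule is exactly what breaks down for why-questions: the wh-word identifies the category, but says nothing about the expected answer type, which is the problem the paper addresses.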

Question classification plays a vital role in the overall accuracy of a QAS, as shown by the research in (Mohasseb et al., 2018; Xu et al., 2016). The question type helps in understanding the focus of a question by identifying its main entities, whereas the answer type determines what the user needs from the question. Appropriate question classification is especially demanding for non-factoid QASs, as their question types admit diverse, ambiguous answers.

Many studies, such as (Sarrouti & Alaoui, 2020; Dodiya & Jain, 2016; Biswas et al., 2014), are based on the standard UIUC taxonomy of Li and Roth (2002) for general QA covering all question types, in which why-type questions belong to the descriptive or summary type and are expected to have a 'reason' answer type. Madabushi and Lee (2016) modified the Li and Roth (2002) taxonomy by including extraterrestrial entities to classify human-made entities, achieving 97.2% accuracy. Hoth (2013) investigates a behavioural-analytic perspective to answer different why-questions, classified as Immediate Antecedent (requires reasoning about something happening), Disposition or summary label (requires reasoning by comparison with other entities), Internal mediating mechanism (concerning behavioural changes occurring), and External historical variables (requires explanation by production history). Koeman et al. (2019) have analyzed why-questions in robotics, such as 'Why did you do that' and 'Why didn't you do that'. Verberne et al. (2008) classified open-domain why-questions as action, process, intensive complementation, monotransitive have, existential there, and declarative layer questions by formulating syntactic patterns when parsing questions with the TOSCA parser. The authors further sub-classified the reason answer type as cause, motivation, circumstance, and purpose. Syntactic rules formulated for each question and answer type were able to assign the correct answer type to 62.2% of questions.

The work by Verberne (2006) motivates us to propose a domain-independent classification of why-questions. Since no common syntactic pattern covers all why-questions and the appropriate answer differs with the user's need, this paper utilizes a combination of lexical, syntactic, and semantic features of a question to predict its type and expected answer type, and to identify the focus of the question and what the user needs from the answer.

The paper contributes towards developing a parser that automatically categorizes an input why-question into its categories, analysing different features of questions, and evaluating the performance of supervised classifiers, viz. logistic regression, support vector machines, decision tree, random forest, XGBoost, and a neural network, in determining question and answer type. Performance is further improved by hyperparameter tuning of the classifiers (Fried et al., 2015), feature selection and scoring by computing feature importance (McElvain et al., 2019), and incorporating more advanced features for classifying questions.
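The overall setup — features extracted from questions feeding a supervised classifier with hyperparameter tuning — can be sketched with scikit-learn. This is a minimal illustration under stated assumptions: the tiny labeled set and its answer-type labels (in the spirit of the cause/motivation sub-classes from Verberne et al., 2008) are invented here, and only TF-IDF lexical features are used, whereas the paper combines lexical, syntactic, and semantic features:

```python
# Hedged sketch of the paper's pipeline shape: lexical features -> SVM,
# with grid-search hyperparameter tuning. The questions and labels below
# are invented for illustration, not the authors' dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

questions = [
    "Why did the company recall the product?",
    "Why do people yawn when others yawn?",
    "Why was the meeting cancelled?",
    "Why does ice float on water?",
    "Why did she decide to resign?",
    "Why do stars twinkle at night?",
]
# Hypothetical answer-type labels (cause vs. motivation)
labels = ["motivation", "cause", "motivation", "cause", "motivation", "cause"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigram + bigram lexical features
    ("svm", LinearSVC()),                            # linear support vector machine
])

# Tune the SVM regularization strength over a small grid
search = GridSearchCV(pipeline, {"svm__C": [0.1, 1.0, 10.0]}, cv=2)
search.fit(questions, labels)
print(search.best_params_)
print(search.predict(["Why did he quit his job?"])[0])
```

In practice the feature matrix would be extended with syntactic cues (e.g., parse-based patterns) and semantic cues before training, and the same grid-search machinery would cover the other classifiers the paper compares.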
