Introduction
Question Answering Systems (QASs) answer a user's natural language question by employing integrated techniques from Natural Language Processing (NLP) and Information Retrieval (IR). In English, questions are categorized as (1) Factoid Questions, which are unambiguous and can be answered directly in a single phrase or sentence; they usually begin with 'what', 'when', 'where', 'who' and 'which'; and (2) Non-Factoid Questions, which are ambiguous and call for varied explanations; they generally begin with 'why' and 'how'. Question analysis is one of the major components of a QAS and performs three major tasks: (1) Question Pre-Processing, which pre-processes a question and extracts important keywords; (2) Question Classification, which analyzes a question by gauging its type and expected answer type; and (3) Question Reformulation, which reformulates a question so that relevant answer documents can be retrieved.
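The factoid/non-factoid split described above can be illustrated with a minimal heuristic that inspects the leading interrogative word. This is only a sketch of the categorization rule, not the paper's method; the function name and label strings are invented for illustration.

```python
# Illustrative heuristic (not the paper's classifier): label a question as
# factoid or non-factoid from its leading interrogative word.
FACTOID_WORDS = {"what", "when", "where", "who", "which"}
NON_FACTOID_WORDS = {"why", "how"}

def question_category(question: str) -> str:
    # Take the first token, lowercased, with trailing punctuation stripped.
    first = question.strip().lower().split()[0].rstrip("?,")
    if first in FACTOID_WORDS:
        return "factoid"
    if first in NON_FACTOID_WORDS:
        return "non-factoid"
    return "unknown"

print(question_category("Where is the Eiffel Tower?"))    # factoid
print(question_category("Why does ice float on water?"))  # non-factoid
```

A real system would of course go beyond the first word (questions need not start with an interrogative), which is precisely the motivation for feature-based classification later in the paper.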
Question classification plays a vital role in the accuracy of a QAS, as shown by the research in (Mohasseb et al., 2018; Xu et al., 2016). The question type helps identify the focus of a question through its main entities, whereas the answer type determines what the user needs from the question. Appropriate question classification is especially critical for non-factoid QASs, since diverse, ambiguous answers are possible for their question types.
Many studies, such as (Sarrouti & Alaoui, 2020; Dodiya & Jain, 2016; Biswas et al., 2014), are based on the standard UIUC taxonomy by Li and Roth (2002) for general QA covering all types of questions, in which why-type questions belong to the descriptive or summary type and are expected to have a 'reason' answer type. Madabushi and Lee (2016) modified the Li and Roth (2002) taxonomy by including extraterrestrial entities to classify human-made entities, achieving 97.2% accuracy. Hoth (2013) investigates a behavioural-analytic perspective for answering different why-questions, classified as Immediate Antecedent (requires reasoning about something happening), Disposition or summary label (requires reasoning by comparison with other entities), Internal mediating mechanism (concerning behavioural changes occurring) and External historical variables (requires explanation through production history). Koeman et al. (2019) have analyzed why-questions in robotics such as 'Why did you do that' and 'Why didn't you do that'. Verberne et al. (2008) classified open-domain why-questions as action, process, intensive complementation, monotransitive have, existential there and declarative layer questions by formulating syntactic patterns while parsing questions with the TOSCA parser. The authors further sub-classified the reason answer type as cause, motivation, circumstance and purpose. Syntactic rules formulated for each question and answer type were able to assign the correct answer type to 62.2% of questions.
The work by Verberne (2006) motivates us to propose a classification of generic why-questions that is independent of any particular domain. Since no common syntactic pattern exists for why-questions and the appropriate answer differs with the user's need, this paper utilizes a combination of lexical, syntactic and semantic features of a question to predict its type and expected answer type, and to identify the focus of the question and what the user needs from the answer.
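To make the idea of combining question features concrete, the sketch below extracts a few lexical and shallow syntactic indicators from a why-question. The feature names and the specific word lists are invented for illustration and are not the paper's actual feature set.

```python
# Hedged sketch: example lexical/shallow-syntactic features for a why-question.
# Feature names and word lists are illustrative, not the paper's feature set.
def extract_features(question: str) -> dict:
    tokens = question.lower().rstrip("?").split()
    return {
        "length": len(tokens),                   # lexical: token count
        "starts_with_why": tokens[0] == "why",   # lexical: leading word
        "has_modal": any(t in {"did", "do", "does", "should", "would"}
                         for t in tokens),       # shallow syntax: auxiliary/modal
        "has_negation": any(t in {"not", "didn't", "don't", "never"}
                            for t in tokens),    # e.g. 'Why didn't ...' questions
    }

feats = extract_features("Why didn't the robot move?")
print(feats["has_negation"])  # True
```

In practice such hand-crafted indicators would be concatenated with richer syntactic (parse-based) and semantic (embedding-based) features before being fed to a classifier.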
This paper contributes a parser that automatically categorizes an input why-question into its categories, analyses different features of questions, and evaluates the performance of supervised classifiers, viz. Logistic Regression, Support Vector Machines, Decision Tree, Random Forest, XGBoost and a neural network, in determining the question and answer type. Performance is further improved by hyperparameter tuning of the classifiers (Fried et al., 2015), feature selection and scoring by computing feature importance (McElvain et al., 2019), and incorporating more advanced features for classifying questions.
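As a rough illustration of the classification-plus-tuning workflow, the sketch below trains a TF-IDF + logistic regression baseline for why-question typing and tunes its regularization strength with a grid search. This is not the authors' implementation: the toy questions, the three category labels and the parameter grid are all invented for this example.

```python
# Hedged sketch, not the paper's system: a TF-IDF + logistic regression
# baseline for why-question classification with grid-search tuning.
# All questions and labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

questions = [
    "Why did the engine overheat?",
    "Why does metal conduct electricity?",
    "Why should I back up my data?",
    "Why did the experiment fail?",
    "Why does ice float on water?",
    "Why should teams review code?",
]
labels = ["action", "process", "motivation",
          "action", "process", "motivation"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Small grid over regularization strength; cv=2 only because the toy set
# has just two examples per class.
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=2)
search.fit(questions, labels)

pred = search.predict(["Why did the server crash?"])[0]
```

Swapping `LogisticRegression` for an SVM, tree ensemble or neural network, and extending the grid to each model's hyperparameters, reproduces the kind of comparison described above.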