Using Rules in the Narrative Knowledge Representation Language (NKRL) Environment

Gian Piero Zarri
DOI: 10.4018/978-1-60566-402-6.ch003

Abstract

NKRL is a semantic language expressly designed to deal with all sorts of ‘narratives’, in particular with those of economic interest (‘non-fictional narratives’). From a knowledge representation point of view, its main characteristic consists in the use of two different sorts of ontologies: a standard, binary ontology of concepts, and an ontology of n-ary templates, where each template corresponds to the formal representation of a class of elementary events. Rules in NKRL correspond to high-level reasoning paradigms like the search for causal relationships or the use of analogical reasoning. Given i) the conceptual complexity of these paradigms, and ii) the sophistication of the underlying representation language, rules in NKRL cannot be implemented in a (weak) ‘inference by inheritance’ style but must follow a more powerful ‘inference by resolution’ approach. After a short reminder about these two inference styles and a quick introduction to the NKRL language, the chapter describes in some depth the main characteristics of the NKRL inference rules.

Introduction And Motivation

‘Narrative’ information concerns the account of some real-life or fictional story (a ‘narrative’) involving concrete or imaginary ‘personages’. In this paper, we will deal mainly with those (multimedia) non-fictional narratives that are typically embodied in corporate memory documents (memos, policy statements, reports, minutes, documentation archives for product development…), news stories, normative and legal texts, medical (or financial, cadastral, administrative…) records, many intelligence messages, surveillance videos or visitor logs, actuality photos and video fragments for newspapers and magazines, eLearning and Cultural Heritage material (text, image, video, sound…), plots and narrative courses of action for videogames, etc. From a ‘content’ point of view, these narratives concern, in practice, the description of spatially and temporally characterized ‘events’ that relate, at some level of abstraction, the behavior or the state of some real-life ‘actors’ (characters, personages, etc.): these try to attain a specific result, experience particular situations, manipulate some (concrete or abstract) materials, send or receive messages, buy, sell, deliver, etc. Note that:

  • The term ‘event’ is taken here in its most general meaning, also covering strictly related notions like fact, action, state, situation, episode, activity, etc.; see (Zarri, 1998) in this context.

  • The ‘actors’ or ‘personages’ involved in the events are not necessarily human beings: we can have narratives concerning, e.g., the vicissitudes in the journey of a nuclear submarine (the ‘actor’, ‘subject’ or ‘personage’) or the various avatars in the life of a commercial product.

  • Even if a large number of (non-fictional) narratives are embodied in natural language (NL) texts, this is not necessarily the case: narrative information is genuinely ‘multimedia’. A photo representing a situation that, verbalized, could be expressed as “The US President is addressing the Congress” is, of course, not an NL document, yet it surely represents a narrative.

In this paper, we will succinctly describe an Artificial Intelligence tool, NKRL, the “Narrative Knowledge Representation Language” (see Zarri, 1998; 2003; 2005), which is, at the same time:

  • A knowledge representation system for describing in some detail the essential content (the ‘meaning’) of complex (non-fictional) narratives;

  • A system of reasoning (inference) procedures that, thanks to the richness of the representation system, is able to automatically retrieve at least part of the implicit information buried in the original data;

  • An implemented software environment.

From a knowledge representation point of view, as will become clear later, the complexity of the information to be dealt with implies the use of an advanced sort of representation, able to describe this information with a minimum loss of the original ‘meaning’. NKRL relies then, fundamentally, on an n-ary type of knowledge representation – see, e.g., (Zarri, 2007) – in contrast with, e.g., the simpler W3C languages (RDF(S), OWL), which are essentially ‘binary’ languages; the informal sketch below illustrates the difference.
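As a purely illustrative comparison (the predicate, role and template names below are assumptions made for this example, not actual NKRL syntax), the following Python sketch contrasts a binary, triple-based rendering of an elementary event with an n-ary rendering in which the predicate and all its role fillers stay together in a single ‘predicative occurrence’:

```python
# Purely illustrative sketch: the predicate (MOVE), roles (SUBJ, OBJ, BENF),
# template label and date attribute are assumptions made for the example.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Binary (RDF-style) rendering: the event has to be split into several triples,
# held together by an artificial reified node ('event_1').
binary_triples: List[Tuple[str, str, str]] = [
    ("event_1", "rdf:type", "SellingEvent"),
    ("event_1", "hasAgent", "company_1"),
    ("event_1", "hasObject", "product_1"),
    ("event_1", "hasBeneficiary", "customer_1"),
    ("event_1", "hasDate", "2008-06-12"),
]

# N-ary (NKRL-style) rendering: one structure instantiating a template of the
# ontology of events, keeping the predicate and all its role fillers together.
@dataclass
class PredicativeOccurrence:
    template: str                 # node of the ontology of events it instantiates
    predicate: str                # conceptual predicate
    roles: Dict[str, str]         # functional role -> filler
    temporal: Dict[str, str] = field(default_factory=dict)

occ1 = PredicativeOccurrence(
    template="Move:TransferOfServiceToSomeone",   # hypothetical template label
    predicate="MOVE",
    roles={"SUBJ": "company_1", "OBJ": "product_1", "BENF": "customer_1"},
    temporal={"date-1": "2008-06-12"},
)
```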

Key Terms in this Chapter

NKRL Inference Engine(s): Software modules that, following the ‘inference by resolution’ general paradigm and making use of (complex) chronological backtracking techniques, implement the different ‘reasoning steps’ included in the NKRL inference rules.
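A minimal sketch of what such chronological backtracking over the ordered reasoning steps of a rule can look like; the data shapes and helper names are assumptions made for the example and do not reproduce the actual NKRL engines:

```python
# Depth-first, chronological backtracking over the ordered 'reasoning steps'
# of a rule; each step returns the alternative variable bindings it can
# produce from the knowledge base.
from typing import Callable, Dict, List, Optional

Binding = Dict[str, str]
Step = Callable[[Binding], List[Binding]]     # a step returns its alternative matches

def run_steps(steps: List[Step], bindings: Binding) -> Optional[Binding]:
    """Try to satisfy every step in order; on failure, control backtracks
    chronologically to the remaining alternatives of the previous step."""
    if not steps:
        return bindings                       # all steps satisfied
    head, tail = steps[0], steps[1:]
    for candidate in head(bindings):          # choice point
        result = run_steps(tail, candidate)   # go deeper
        if result is not None:
            return result                     # success propagates upward
    return None                               # alternatives exhausted: backtrack

# Toy usage: the first binding fails at step 2, so the engine backtracks.
step1: Step = lambda b: [dict(b, x="company_1"), dict(b, x="company_2")]
step2: Step = lambda b: [b] if b.get("x") == "company_2" else []
print(run_steps([step1, step2], {}))          # -> {'x': 'company_2'}
```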

Integration of the NKRL Inference Rules: Concerns the possibility of using the transformation rules to automatically ‘transform’ the reasoning steps executed within a hypothesis context. A first consequence of this option is that a hypothesis otherwise doomed to fail can be executed to its term, given that the transformations allow the inference engine to ‘adapt’ the reasoning steps originally included in the hypothesis to the actual contents of the knowledge base. More generally, systematically transforming all the reasoning steps of a hypothesis, even when these are successful, allows us to uncover large portions of the ‘implicit information’ buried in the base.
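A hypothetical sketch of this integration, using the same toy query shapes as the other sketches: a reasoning step that finds no direct match in the knowledge base is retried through its transformed, ‘semantically close’ variants before the hypothesis is abandoned:

```python
# Hypothetical integration of the two rule families: direct matching first,
# then the variants produced by a transformation function.
from typing import Callable, Dict, List, Tuple

Query = Tuple[str, Dict[str, str]]

def satisfy(step: Query, kb: List[Query],
            transform: Callable[[Query], List[Query]]) -> List[Query]:
    """Return the occurrences matching the step, directly or via transformation."""
    matches = [occ for occ in kb if occ == step]        # direct match (toy test)
    if matches:
        return matches
    for variant in transform(step):                     # 'adapt' the reasoning step
        matches = [occ for occ in kb if occ == variant]
        if matches:
            return matches
    return []                                           # the step (and the hypothesis) fails
```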

NKRL: The Narrative Knowledge Representation Language, specifically implemented to deal with non-fictional narratives of economic interest. ‘Classical’ ontologies are largely sufficient to provide a static, a priori definition of the concepts and of their properties. This is no longer true when we consider the dynamic behavior of the concepts, i.e., when we want to describe their mutual relationships as they take part in some concrete action, situation, etc. (‘events’). NKRL deals with this problem by adding to the usual ontology of concepts an ‘ontology of events’, a new sort of hierarchical organization where the nodes, called ‘templates’, represent general classes of events like “move a physical object”, “be present in a place”, “produce a service”, “send/receive a message”, etc.
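The following sketch illustrates, under invented names, the idea of an ‘ontology of events’ as a hierarchy of templates; the template labels, predicate and role constraints are assumptions for the example, not NKRL’s actual catalogue:

```python
# An 'ontology of events' sketched as a hierarchy of templates, each pairing a
# conceptual predicate with role constraints drawn from the binary ontology.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Template:
    name: str
    predicate: str
    roles: Dict[str, str]              # role -> constraint (a concept of the binary ontology)
    parent: Optional["Template"] = None

def specializes(t: Template, ancestor: Template) -> bool:
    """True if t sits below ancestor in the hierarchy of templates."""
    while t is not None:
        if t is ancestor:
            return True
        t = t.parent
    return False

move = Template("Move:MoveEntity", "MOVE",
                {"SUBJ": "human_being_or_social_body", "OBJ": "entity_"})
move_service = Template("Move:TransferToSomeone", "MOVE",
                        {"SUBJ": "human_being_or_social_body",
                         "OBJ": "service_",
                         "BENF": "human_being_or_social_body"},
                        parent=move)
print(specializes(move_service, move))   # -> True
```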

NKRL Inference Rules, Hypotheses: They are used to automatically build up ‘reasonable’ connections among the information stored in an NKRL knowledge base according to a number of pre-defined reasoning schemata, e.g., ‘causal’ schemata.
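A purely illustrative ‘causal’ schema, with premise and condition contents invented for the example (they are not taken from the NKRL rule base):

```python
# A hypothesis rule as a premise plus condition schemata; contents are invented.
hypothesis_rule = {
    # premise: the kind of event to be 'explained', e.g. "?x received ?y's shares"
    "premise": ("RECEIVE", {"SUBJ": "?x", "OBJ": "stock_shares", "SOURCE": "?y"}),
    # condition schemata: the reasoning steps whose joint satisfaction against
    # the knowledge base supplies a plausible explanation of the premise
    "condition_schemata": [
        ("EXPERIENCE", {"SUBJ": "?y", "OBJ": "financial_difficulty"}),
        ("BEHAVE", {"SUBJ": "?x", "CONTEXT": "?sector"}),
        ("BEHAVE", {"SUBJ": "?y", "CONTEXT": "?sector"}),
    ],
}
# If every condition schema can be matched (see the backtracking sketch above),
# the matched occurrences form a 'reasonable' causal connection to the premise.
```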

Narrative Information: Concerns, in general, the account of some real-life or fictional story (a ‘narrative’) involving concrete or imaginary ‘personages’. In the case of non-fictional narratives of economic interest, the personages are ‘real characters’, and the narrative takes place in the real world. Moreover, these narratives are embodied in multimedia documents of specific economic interest: corporate memory documents, news stories, normative and legal texts, medical records, intelligence messages, surveillance videos or visitor logs, etc.

NKRL Inference Rules, Transformations: These rules try to ‘adapt’, from a semantic point of view, a query that has failed to the contents of the existing knowledge base. The principle employed consists in using rules to automatically ‘transform’ the original query into one or more different queries that are not strictly ‘equivalent’ but only ‘semantically close’ to the original one.
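An illustrative sketch of the principle; the concrete rewriting used here (a query about someone being wounded is retried as a query about a transport to a hospital) is only meant to convey the idea of a ‘semantically close’ query and is not quoted from NKRL:

```python
# A failed query is rewritten into one or more 'semantically close' queries.
from typing import Dict, List, Tuple

Query = Tuple[str, Dict[str, str]]

def transform(query: Query) -> List[Query]:
    """Return queries that are not equivalent, only semantically close,
    to the original one."""
    predicate, roles = query
    variants: List[Query] = []
    if predicate == "EXPERIENCE" and roles.get("OBJ") == "wound_":
        # "x was wounded?" can be plausibly answered by
        # "x was transported to a hospital?"
        variants.append(("MOVE", {"OBJ": roles["SUBJ"], "DEST": "hospital_"}))
    return variants

failed_query: Query = ("EXPERIENCE", {"SUBJ": "person_1", "OBJ": "wound_"})
for alternative in transform(failed_query):
    print("retry with:", alternative)   # -> ('MOVE', {'OBJ': 'person_1', 'DEST': 'hospital_'})
```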

‘Inference by resolution’ vs. ‘inference by inheritance’: Two contrasting ways of implementing ‘reasoning’ in knowledge-based systems. In the first way – developed originally in an automatic theorem-proving context – the result of the inference operations is the deduction of new facts (new knowledge) from the existing information. It is, then, the privileged way of setting up ‘rules’. In the second, weaker way – used, e.g., as the ‘native’ reasoning tool in the so-called W3C languages (RDF(S), OWL) – no new knowledge is produced from the pre-existing one, and the reasoning techniques are only used to solve classification problems when setting up well-formed ontologies. Typical tasks in this context are, e.g., i) checking the consistency of classes/concepts (i.e., determining whether a class can have any instances), and ii) calculating the subsumption hierarchy (i.e., arranging the classes according to their generic/specific relationships).
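A toy contrast between the two styles, with invented data: the first fragment derives a new fact from existing ones (resolution-like), while the second only checks where a class sits in the subsumption hierarchy (inheritance-like):

```python
# 'Inference by resolution': a rule derives a NEW fact from the existing ones.
facts = {("parent", "anne", "bob"), ("parent", "bob", "carl")}
new_facts = {("grandparent", x, z)
             for (p1, x, y1) in facts if p1 == "parent"
             for (p2, y2, z) in facts if p2 == "parent" and y1 == y2}
print(new_facts)   # -> {('grandparent', 'anne', 'carl')}: knowledge not stated before

# 'Inference by inheritance': no new facts, only classification, e.g. checking
# where a class is arranged in the subsumption hierarchy.
subclass_of = {"surgeon": "physician", "physician": "person"}

def subsumes(general: str, specific: str) -> bool:
    while specific in subclass_of:
        specific = subclass_of[specific]
        if specific == general:
            return True
    return False

print(subsumes("person", "surgeon"))   # -> True: 'surgeon' is classified under 'person'
```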
