A Free Service of IGI Global Publishing House

What is Knowledge Normalization?

Handbook of Research on Learning Design and Learning Objects: Issues, Applications, and Technologies
Aims to ease manual or automatic knowledge comparison and retrieval by reducing the number of incomparable ways information is or can be written, and by improving the way objects are (re-)presented and connected. Lexical normalization involves following object naming rules such as “use English singular nouns or nominal expressions” and “follow the underscore-based style instead of the InterCap style.” Structural and ontological normalization involves following rules such as “when introducing an object into an ontology, relate it to all its already represented direct generalizations, specializations, components, and containers,” “use subtypeOf relations instead of, or in addition to, instanceOf relations when both cases are possible,” “avoid the use of non-binary relations,” and “do not represent processes via relations.” These last example rules lead to the introduction of the concept type “sitting_down” instead of the relation types “sits,” “sitsOn,” and “sits_on_atPointInTime,” which are incomparable with one another. Thus, the sentence “some animal sits above some artifact” can be represented in the following explicit form in the Formalized-English notation: “some animal is agent of a sitting_down above some artifact” (this sentence uses the very common basic relations “agent” and “above”).

As this example illustrates, knowledge normalization reduces redundancy and increases the precision and scalability of knowledge modelling. Scalable knowledge modelling and sharing approaches maintain the possibility of efficiently and correctly finding and/or inserting a piece of information even when the knowledge base (KB) becomes very large. Scalability implies the exploitation of automatic procedures for (i) discovering inconsistencies and redundancies during knowledge updates, and (ii) filtering knowledge according to various criteria during searches.
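The two kinds of rules above can be sketched in code. The following is a minimal illustrative sketch in Python, not the WebKB-2 implementation: the triple representation, the function names, and the relation vocabulary (`agent_of`, `above`) are assumptions made for the example. It rewrites the incomparable relation types “sits,” “sitsOn,” and “sits_on_atPointInTime” into triples around the single concept type “sitting_down,” and converts an InterCap name into the underscore-based style.

```python
import re

# A fact is a (subject, relation, object) triple.
Fact = tuple

# Process-denoting relation types that the rule "do not represent
# processes via relations" tells us to rewrite as a concept type.
PROCESS_RELATIONS = {"sits", "sitsOn", "sits_on_atPointInTime"}

def normalize(fact):
    """Rewrite a process-as-relation fact into concept-type form.

    ("some animal", "sitsOn", "some artifact") becomes two triples
    around the introduced process instance "a sitting_down".
    """
    subject, relation, obj = fact
    if relation not in PROCESS_RELATIONS:
        return [fact]                       # already normalized
    process = "a sitting_down"              # introduced process instance
    return [
        (subject, "agent_of", process),     # "some animal is agent of a sitting_down"
        (process, "above", obj),            # "... above some artifact"
    ]

def underscore_style(name):
    """Lexical normalization: InterCap style -> underscore-based style."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(normalize(("some animal", "sitsOn", "some artifact")))
# [('some animal', 'agent_of', 'a sitting_down'), ('a sitting_down', 'above', 'some artifact')]
print(underscore_style("sitsOn"))   # sits_on
```

Because every process-as-relation variant collapses into the same concept-type form, two independently written facts about the same situation become directly comparable, which is the point of the normalization.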
Published in Chapter:
For the Ultimate Accessibility and Reusability
Philippe Martin (Griffith University, Australia) and Michel Eboueya (University of La Rochelle, France)
DOI: 10.4018/978-1-59904-861-1.ch029
Abstract
This chapter first argues that current approaches for sharing and retrieving learning objects, or any other kinds of information, are neither efficient nor scalable, essentially because almost all of these approaches are based on the manual or automatic indexing or merging of independently created formal or informal resources. It then shows that tightly interconnected, collaboratively updated, formal or semi-formal large knowledge bases (semantic networks) can, should, and probably will be used as a shared medium for the tasks of researching, publishing, teaching, learning, evaluating, and collaborating, and thus ease or complement traditional methods such as face-to-face teaching and document publishing. To test and support these claims, the authors have implemented their ideas in a knowledge server named WebKB-2 and begun representing their research domain and several courses at their universities. The same underlying techniques could be applied to a semantic/learning grid or peer-to-peer network.