Semantic Web and Adaptivity: Towards a New Model

Jorge Marx Gómez, Ammar Memari
DOI: 10.4018/978-1-60566-650-1.ch027

Abstract

This chapter proposes a model that abstracts the functionalities and data involved in adaptive applications for the Semantic Web. As the quantity of information provided on the Web grows larger, adaptation in software becomes increasingly necessary to maximize the productivity of individuals, and more issues emerge that must be considered in the next generation of hypertext systems. With the advent of the Semantic Web, adaptation can be performed autonomously and at runtime, making the whole process of adapting information transparent to the user.

Introduction And Motivation

Recent developments within the Semantic Web community suggest that the Internet, or the WWW as we know it, is about to change: content and services will be annotated with metadata that can describe and define them.

Traditionally, the Web is seen as a collection of linked nodes, as entailed by the reference model for hypertext applications, the Dexter model (Halasz & Schwartz, 1994). This model has long succeeded in abstracting the applications that deliver Web resources for human users.

However, the supply of information is steadily increasing. Today's Web is the place for expressing ideas, telling stories, blogging, and sharing movies, photos, and sound clips: anything that anybody ever wanted to say. The amount of knowledge available to any one person is therefore far more than they can possibly absorb (Bailey, 2002), and exposing the human brain to such a pile of information causes the "lost in hyperspace" problem, in which useful information remains unfound because it is buried under a huge amount of useless and irrelevant data.

Fortunately, as the supply of information increases, so do automatic information-processing capabilities. There is, and will be, great potential to use these capabilities to extract from the overflow of the Web the information and services relevant to the user on an ad-hoc basis, and to deliver them over a standardized user interface. Retrieving data in this adaptive manner gains importance as the mass of provided information grows.

We can notice that the portion of information accessed by any individual on the Internet, however large, is nearly negligible compared to the full amount available. What if we were able to define this portion of information, or draw its border?

The process of moving from the information domain to the interest domain is similar to moving from the time domain to the frequency domain through a Fourier transformation: the same content is re-expressed along new dimensions (the user's interests) in which the relevant components stand out and the irrelevant ones can be discarded.
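As a rough illustration (the scoring function, tags, and weights below are invented for this sketch, not taken from the chapter), projecting items from the information domain onto a user's interest domain can be seen as a scoring step followed by a cutoff:

```python
# Sketch: project items from the "information domain" onto a user's
# "interest domain" by scoring each item against an interest profile.
# All names, tags, and weights are illustrative.

def interest_score(item_tags, interest_profile):
    """Sum the user's interest weights over the tags an item carries."""
    return sum(interest_profile.get(tag, 0.0) for tag in item_tags)

def adapt(items, interest_profile, threshold=0.5):
    """Keep only items whose interest score exceeds the threshold,
    ranked by decreasing relevance."""
    scored = [(interest_score(tags, interest_profile), name)
              for name, tags in items]
    relevant = [(score, name) for score, name in scored if score > threshold]
    return [name for score, name in sorted(relevant, reverse=True)]

items = [
    ("semantic-web-intro",  {"semantic-web", "rdf"}),
    ("cat-videos",          {"entertainment"}),
    ("adaptive-hypermedia", {"adaptation", "hypertext"}),
]
profile = {"semantic-web": 0.9, "rdf": 0.4, "adaptation": 0.8}

print(adapt(items, profile))  # ['semantic-web-intro', 'adaptive-hypermedia']
```

The item about entertainment falls below the threshold and is dropped; in the analogy, it is a frequency component with negligible amplitude in the user's interest spectrum.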

Efforts have been made to abstract the functionality and data representation of hypermedia systems into a model, or even a reference model. Two of these are most widely used: the Dexter hypertext reference model and the model of the World Wide Web, which differs slightly from Dexter.

These models can no longer abstract the functionalities and data required by adaptive applications for the Semantic Web. Many efforts have been made to extend the Dexter model with adaptive functionality, but most of them carried along some of Dexter's limitations, and others concentrated on the static structure of the Semantic Web rather than on the wider spectrum of both static and dynamic relations between knowledge and its consumer (see Memari & Marx-Gomez, 2008). In fact, even the Dexter model had some aspects of adaptivity, but it had them unintentionally, and they are certainly not sufficient (Dodd, 2008).

Key Terms in this Chapter

Reference Model: An abstract representation of the entities and relations within a problem space; it forms the conceptual basis to derive more concrete models from which an implementation can be developed.

Semantic Web: An extension of the World Wide Web in which the semantics of the offered informational and transactional resources are provided and represented in a machine-understandable manner.
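To make "machine-understandable" concrete: resources on the Semantic Web are typically described as subject-predicate-object triples, as in RDF. A minimal sketch using plain Python (the URIs and vocabulary prefixes below are illustrative, not real vocabularies) shows how such annotations can be stored and queried by a machine:

```python
# Sketch of RDF-style subject-predicate-object annotations.
# URIs and prefixes (ex:, dc:, rdf:) are illustrative placeholders.

triples = [
    ("ex:chapter27", "dc:title",   "Semantic Web and Adaptivity"),
    ("ex:chapter27", "dc:creator", "ex:MarxGomez"),
    ("ex:chapter27", "dc:creator", "ex:Memari"),
    ("ex:chapter27", "rdf:type",   "ex:BookChapter"),
]

def match(triples, s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# A machine can now answer: who created chapter 27?
creators = [o for _, _, o in match(triples, s="ex:chapter27", p="dc:creator")]
print(creators)  # ['ex:MarxGomez', 'ex:Memari']
```

Because the relations are explicit rather than buried in prose, an agent can combine such patterns to select and adapt resources without human interpretation.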

Adaptation: A process that, in general, includes selecting relevant items from a set in a way suited to given requirements or environmental conditions.

Agency Layer: A layer in our proposed model that has standardized interfaces with the layers above and below it and contains the activities of a wide variety of software agents.

Hypertext: Text that contains links to other text.

Hypermedia: Hypertext that is not constrained to text; it can also include graphics, video, and sound.

Multi-Agent System (MAS): A system composed of multiple interacting intelligent agents; such a system can be used to solve problems that no single agent could solve alone.
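A toy illustration of the cooperation idea (the agent roles, skills, and task are invented for this sketch): each agent can handle only one kind of subtask, so completing the combined task requires routing work between agents.

```python
# Toy multi-agent sketch: no single agent can complete the whole task;
# a simple coordinator routes each subtask to a capable agent.
# Agent skills and the task pipeline are illustrative.

class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # the one subtask type this agent handles

    def can_handle(self, kind):
        return kind == self.skill

    def perform(self, kind, payload):
        if kind == "search":
            return [x for x in payload if "web" in x]  # filter relevant items
        if kind == "rank":
            return sorted(payload)                     # order the results

def solve(task, agents):
    """Route each subtask in the pipeline to an agent able to handle it."""
    result = task["input"]
    for kind in task["pipeline"]:
        agent = next(a for a in agents if a.can_handle(kind))
        result = agent.perform(kind, result)
    return result

agents = [Agent("searcher", "search"), Agent("ranker", "rank")]
task = {"input": ["semantic web", "cooking", "web agents"],
        "pipeline": ["search", "rank"]}
print(solve(task, agents))  # ['semantic web', 'web agents']
```

Neither the searcher nor the ranker can produce the final result on its own; the solution emerges from their interaction, which is the defining property of a MAS.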
