Cases on Semantic Interoperability for Information Systems Integration: Practices and Applications


Yannis Kalfoglou (RICOH Europe plc and University of Southampton, UK)
Indexed In: SCOPUS
Release Date: October, 2009|Copyright: © 2010 |Pages: 376
DOI: 10.4018/978-1-60566-894-9
ISBN13: 9781605668949|ISBN10: 160566894X|EISBN13: 9781605668956|ISBN13 Softcover: 9781616923983
Description & Coverage

Semantic interoperability provides the means to automatically process and integrate large amounts of information without human intervention.

Cases on Semantic Interoperability for Information Systems Integration: Practices and Applications provides an in-depth analysis of the issues involved in applying semantic interoperability to the information assimilation tasks faced by field professionals. This significant collection of research explains in depth the issues involved in integrating large amounts of heterogeneous information, and points to the deficiencies of current systems.


The many academic areas covered in this publication include, but are not limited to:

  • Geospatial Semantic Web
  • Ontological stance
  • Semantic extraction and annotation
  • Semantic Interoperability
  • Semantic mediation
  • Semantic peer-to-peer system
  • Semantic synchronization
  • Service integration
  • Streamlining semantic integration systems
  • Structure-preserving semantic matching
Reviews and Testimonials

This book brings together some of the best current thinking on the conceptual foundations and domain-specific practices of semantic interoperability. It is an essential resource for all those concerned with resolving the problem of semantic heterogeneity.

– Peter E. Hart, Ricoh Innovations, Inc., California

This book looks at ways in which theoreticians and engineers have managed, so far, to come to terms with heterogeneity.

– Dave Robertson, University of Edinburgh, UK
Editor Biographies
Yannis Kalfoglou is a technology innovation consultant with RICOH Europe Plc and a visiting senior research fellow at the University of Southampton. He has published extensively in the field of semantic interoperability and integration, and he is a pioneer in ontology mapping technology. He holds a PhD in artificial intelligence from the University of Edinburgh and has several years of postdoctoral experience in artificial intelligence, knowledge engineering and management, and the Semantic Web. He has participated in national and international funding programmes on the use of AI technologies on the Web, and has led industrially funded projects on the provision of services in the field of semantic interoperability. He serves on several programme committees for national and international research consortia, and has consulted for venture capital funds on the use of semantic technologies.



Today’s information systems are becoming increasingly complex. Managing their complexity has been a theme of interest, and of considerable contributions, from a variety of communities, practitioners and enterprises since the mid-1980s. This work has resulted in information systems that can manage massive amounts of data and complex transactions, and serve a multitude of applications across global networks. Nowadays, however, we face a new challenge that undermines information systems’ productivity: heterogeneity. As information systems become more distributed and open to serve the needs of an increasingly distributed IT environment, heterogeneity, which is inherent in independently constructed information systems, emerges as an obstacle. The crux of the problem with heterogeneity is that it impedes productivity by making the integration of information systems difficult. Yet integration of information systems is deemed a prerequisite for operating across different market segments, geographic regions and information systems. Distribution, seamless connectivity, and the omnipresence of information on a global scale are the norm in today’s business environment, and information systems have to support and enable them.

Heterogeneity occurs at many levels in an information system; however, the most important source of heterogeneity lies in the conception, modelling and structuring of information. There are cultural, regional and organisational variations of the same piece of information. This results in heterogeneity: it is common to encounter different forms of information regarding the same entity of interest across different information systems. To overcome this burden, interoperability solutions have been proposed and used extensively over the past three decades. From the early days of Electronic Data Interchange to XML solutions in recent times, interoperability solutions aim to glue together heterogeneous information data sets and increase productivity. Despite the plethora of interoperability solutions, though, we observe that most focus on one aspect of heterogeneity: syntax. That is, they tackle syntactic variations in the representation of information.

Semantic interoperability

However, as systems become more distributed and disparate within and across organisational boundaries and market segments, there is a need to preserve the meaning of entities used in everyday transactions that involve information sharing. For these transactions to be successful, we need to be able to uncover and expose the semantics of the elements taking part in them. That goes beyond the detection of simple syntactic variations and tackles the issue of information entities that are expressed in a syntactically similar manner but have different meanings. Solutions that take into account the meaning of information entities are often characterised as semantic integration or semantic interoperability. Semantic interoperability and integration are concerned with the use of explicit semantic descriptions to facilitate information and systems integration. Semantic technologies, primarily ontologies and the Semantic Web, provide the means to attach meaning to conventional concepts. That makes it possible to automatically process and integrate large amounts of information without human intervention.
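The idea of attaching explicit, shared semantics to locally named information entities can be sketched in a few lines. The systems, field names and concept URIs below are entirely hypothetical; the point is only that once two systems’ local vocabularies are mapped to a shared conceptualisation, their records can be merged mechanically, without human intervention.

```python
# Hypothetical sketch: two systems use different local terms for the same
# concept. A shared ontology maps each (system, local term) pair to a
# canonical concept URI, so records become directly comparable.

ONTOLOGY = {
    ("crm", "client"):    "ex:Customer",
    ("billing", "payer"): "ex:Customer",
    ("crm", "tel"):       "ex:phoneNumber",
    ("billing", "phone"): "ex:phoneNumber",
}

def to_canonical(system, record):
    """Rewrite a record's local field names into shared ontology concepts."""
    return {ONTOLOGY[(system, key)]: value for key, value in record.items()}

crm_rec     = {"client": "ACME Ltd", "tel": "+44 20 0000 0000"}
billing_rec = {"payer": "ACME Ltd", "phone": "+44 20 0000 0000"}

# Both records normalise to the same semantic form, so they can be merged.
assert to_canonical("crm", crm_rec) == to_canonical("billing", billing_rec)
```

The hard part in practice is, of course, building and agreeing the `ONTOLOGY` table itself; the chapters in this book are largely about that problem.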

There exist, however, various perceptions of what semantic interoperability and integration stand for. These notions are much contested, remain fuzzy, and have been used over the past decade in a variety of ways. Moreover, as reported in (Pollock, 2002), both terms are often used interchangeably, and some view them as the same thing.

The ISO/IEC 2382 Information Technology Vocabulary defines interoperability as ``the capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units''. The ISO/IEC 14662 IT Open-edi Reference Model International Standard emphasises the importance of agreed semantics: “user groups have identified the need to agree on information models at the semantic level before agreeing on specific data structures and data representations to be exchanged between parties”, and further notes the benefits of agreed semantics: “agreement on semantics of data is needed to reconcile and co-ordinate different data representations used in different industry sectors; agreed information models allow completeness and consistency controls on data exchanged between parties.”

In a debate on the mailing list of the IEEE Standard Upper Ontology working group, a more formal approach to semantic interoperability was advocated: use logic to guarantee that, after data are transmitted from a sender system to a receiver, all implications made by one system hold and are provable by the other, and that there is a logical equivalence between those implications. With respect to integration, Uschold and Gruninger argued that ``two agents are semantically integrated if they can successfully communicate with each other'' and that ``successful exchange of information means that the agents understand each other and there is guaranteed accuracy'' (Uschold & Gruninger, 2002).
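The mailing-list criterion can be illustrated with a toy model, invented here purely for illustration: encode each system’s knowledge as facts plus implication rules, compute the deductive closure by forward chaining, and call the transfer semantically interoperable in this strict sense only if the two closures coincide.

```python
# Toy sketch of the "provable implications" criterion. Rules are
# (premise, conclusion) pairs over atomic facts; derivation is forward
# chaining to a fixed point. All facts and rules are invented examples.

def closure(facts, rules):
    """Return every fact derivable from `facts` under `rules`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [("invoice_sent", "payment_due"), ("payment_due", "reminder_allowed")]

sender   = closure({"invoice_sent"}, rules)  # what the sender can prove
receiver = closure({"invoice_sent"}, rules)  # what the receiver can prove

# Strict semantic interoperability here: identical deductive closures.
assert sender == receiver == {"invoice_sent", "payment_due", "reminder_allowed"}
```

Real systems, of course, rarely share identical rule sets; the interesting cases are exactly those where the closures diverge.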

Impact and outreach

Interoperability has also been the focus of high-profile governmental, industrial and scientific endeavours in recent years. For example, the US Government’s Healthcare Information Technology Enterprise Integration Act calls for interoperability solutions to increase the efficiency and productivity of heterogeneous healthcare IT systems. Similarly, the European Commission is working on a number of initiatives for the knowledge economy, like the well-publicised i2010 initiative, in which interoperability is a focal point as it aims to achieve interoperable systems for eGovernment services across Europe. The IDABC (Interoperable Delivery of European eGovernment Services to public Administrations, Business and Citizens) programme and the SEMIC.EU platform are two exemplars of interoperability work in Europe’s public sector. According to David White, Director, European Commission, Enterprise and Industry Directorate General: “the realization of i2010 goals will very much depend on platforms, services and applications being able to talk to one another and to build an economic activity on the information retrieved. This is what we understand as interoperability. It is complex, not limited to the infrastructure level but encompasses semantic interoperability, organisational interoperability, and even regulatory interoperability”. In the sciences, the high-profile e-Science programme praises the role of interoperability and highlights the need for it: “Interoperability is key to all aspects of scale that characterize e-Science, such as scale of data, computation, and collaboration. We need interoperable information in order to query across the multiple, diverse data sets, and an interoperable infrastructure to make use of existing services for doing this.” (Hendler, 2004).

Financially, the value of interoperability has been praised by Schrage (2009): “Look no further than the internet for the inspiration for interoperable innovation. The misunderstood genius of the internet is that interoperability makes ‘networks of networks’ possible. Protocols permitting diverse data to mingle creatively explain why the internet's influence as a multimedia, multifunctional and multidisciplinary environment for innovation remains unsurpassed.” Although the “data mingle” that Schrage calls for is evident in today’s large data mashup ventures, the protocols to support such activity remain hard to design and build.

Solutions and trends

Semantic interoperability solutions use a variety of technologies; however, the dominant paradigm is to use technologies geared toward the management and exploitation of codified semantics, often attached to information entities as descriptors. One such technology is the ontology. According to the famous definition of Gruber (1995), an ontology is “an explicit specification of a conceptualisation”, and it aims to enable knowledge sharing. Early ontology work suggested that ontologies are suitable for achieving interoperability between disparate systems. In the mid-1990s, the seminal article by Uschold and Gruninger provided supportive evidence for this claim (Uschold & Gruninger, 1996).

As the figure illustrates, the presence of an ontology makes it possible for two disparate systems (in this example, a method library and a procedure viewer) to communicate, and ultimately share knowledge, even though they use different vocabularies. This was the dominant approach in the 1990s. It has been applied to some of the long-lasting knowledge sharing projects, as well as to a plethora of smaller knowledge sharing tasks. It is effective once the ontology is up and running, and evidently has a knock-on effect on sharing and design costs (Uschold, 1998). However, it is not efficient: designing the “perfect” ontology that will accommodate all needs is not an easy task. There are irreconcilable arguments among engineers about how and what knowledge should be modelled when trying to build a comprehensive ontology for a given domain. Even when a committed group resolves the disputed issues and releases the ontology, there are often inappropriate interpretations of its constructs by users, or simply a lack of appropriate tools to reason over it.

We also observe changes in the environment and practice of knowledge and ontology engineering: ontologies have transformed from a knowledge representation experiment in the artificial intelligence community of the early 1990s to a mainstream technology that transcends community boundaries and increasingly penetrates the commercial world. Furthermore, the emergence of the Semantic Web has made it possible to publish and access far more ontologies than knowledge engineers ever thought it would be possible to build. Consequently, ontologies have proliferated and been made publicly available and accessible to large audiences. This has brought forward a number of issues regarding scalability, authoring, deployment, and, most importantly, the interoperability of ontologies themselves. This is different from having a single, consensual ontology upon which interoperability is based and with which engineers have to work out how their systems will communicate. Hence, there is a call for ontology-to-ontology interoperability, which includes the acknowledged problem of ontology mapping (Kalfoglou & Schorlemmer, 2003).
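A minimal sketch conveys both why ontology mapping is needed and how naive it can be when done purely lexically: the matcher below proposes correspondences between the class labels of two hypothetical ontologies from string similarity alone. Real mapping systems, such as those surveyed in (Kalfoglou & Schorlemmer, 2003), combine many more sources of evidence (structure, instances, background knowledge); this is only the simplest baseline.

```python
# Naive label-based ontology matcher: propose a correspondence between two
# ontologies' class labels whenever string similarity exceeds a threshold.
# The two ontologies below are invented examples.
from difflib import SequenceMatcher

def match_classes(onto_a, onto_b, threshold=0.7):
    """Return (label_a, label_b, score) triples above the threshold."""
    mappings = []
    for a in onto_a:
        for b in onto_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                mappings.append((a, b, round(score, 2)))
    return mappings

onto_a = ["Person", "Organisation", "Publication"]
onto_b = ["person", "organization", "article"]

for a, b, score in match_classes(onto_a, onto_b):
    print(f"{a} <-> {b} ({score})")
```

Note that a lexical matcher happily pairs “Organisation” with “organization” but has no way to discover that “Publication” and “article” are related, which is exactly the gap that semantic techniques aim to close.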

The road ahead

Semantic interoperability solutions that tackle heterogeneity will continue to thrive and attract interest from a variety of communities. In fact, the diversity of solutions and techniques used is one of the most fascinating aspects of semantic interoperability work. Ontologies, Semantic Web techniques, syntax matching algorithms, natural language processing, text engineering, machine learning and standards are all used in one way or another to tackle heterogeneity and enable automation in information systems integration. This book is carefully structured to present this variety of solutions from an engineering and practising perspective. This also reflects a recent trend in semantic interoperability solutions: a holistic approach that draws on best practices from a variety of fields and technologies is often preferred over a vertical approach that focuses on a specific technology. For example, a heavyweight knowledge engineering approach with formal ontologies and automated inference can greatly benefit from lightweight text engineering techniques and serendipitous knowledge acquisition utilizing Web 2.0 practices.

Organisation of the book

The book contains 14 well-presented cases on the use of semantic interoperability for a variety of applications, ranging from manufacturing to tourism, e-commerce, energy grid integration, geospatial systems interoperability and automated agents’ interoperability for web services. The book is split into two main sections: Section A covers novel concepts for engineering semantic interoperability solutions, whereas Section B focuses on domain-specific interoperability solutions.

Novel concepts for engineering semantic interoperability solutions

This section introduces novel concepts advocated for engineering semantic interoperability solutions: entity centric design of semantic interoperability solutions; ontological stance as an operational characterisation of the intended models used for enabling interoperability; the role of message structures and semantic mediation; structure preserving semantic matching; unified frameworks for streamlining multiple semantic integration solutions; semantic peer-to-peer design principles; and the role of quality frameworks in the design of semantic interoperability solutions.

In Chapter I, “Entity-centric Semantic Interoperability”, Paolo Bouquet, Heiko Stoermer, Wojciech Barczyński and Stefano Bocconi introduce the concept of entity-centric semantic interoperability. The authors distinguish two distinct approaches to semantic interoperability: schema-centric and entity-centric. The former is by far the most popular and tackles the problem of heterogeneity by finding alignments and mappings across the heterogeneous schemata used to structure information. In contrast, the latter focuses on the identification and manipulation of the entities that make up the information space, and is less investigated. The authors point to issues that hinder interoperability: (a) a proliferation of identifiers is taking place, because the same object is potentially issued with a new identifier in several information systems, and therefore applications need to keep track of a growing number of identifiers; (b) reference to entities across information systems is very complicated or impossible, because there are no means of knowing how an entity is identified in another system; (c) injectivity of identifiers is in general not guaranteed, since the same identifier can denote different entities in different information systems. Bouquet and colleagues argue that for large-scale, heterogeneous information systems, entity-centric semantic interoperability solutions are better suited, due to the unprecedented proliferation of uniquely identified entities which do not necessarily adhere to a predefined schema. The authors also elaborate on how entity-centric solutions to the semantic heterogeneity problem can support various forms of interoperability without the need to achieve schema-level interoperability.
The supporting case is drawn from the European Commission funded consortium OKKAM, with a focus on two applications in the domains of corporate information management and online publishers’ information extraction and annotation; the exemplar applications are drawn from SAP and Elsevier, respectively. The authors put the idea of entity-centric semantic interoperability into practice using a proposed infrastructure, the Entity Name System (ENS), built under the auspices of the OKKAM consortium.
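The entity-centric idea can be sketched independently of the actual ENS infrastructure, whose API is not described here. In the hypothetical in-memory registry below, each system-local identifier is registered against a single global entity URI, so two applications can decide whether their local IDs denote the same entity without any schema alignment; all names, system labels and URIs are invented for illustration.

```python
# Hypothetical entity-name registry in the spirit of an ENS: local IDs
# resolve to global entity URIs, making cross-system identity checks
# possible without schema-level integration.

class EntityRegistry:
    def __init__(self):
        self._global_of = {}  # (system, local_id) -> global entity URI

    def register(self, system, local_id, global_uri):
        self._global_of[(system, local_id)] = global_uri

    def resolve(self, system, local_id):
        return self._global_of.get((system, local_id))

    def same_entity(self, ref_a, ref_b):
        """Do two (system, local_id) references denote the same entity?"""
        ga, gb = self.resolve(*ref_a), self.resolve(*ref_b)
        return ga is not None and ga == gb

reg = EntityRegistry()
reg.register("sap_crm", "C-1042", "okkam:entity/8f3a")   # invented IDs
reg.register("elsevier", "auth-77", "okkam:entity/8f3a")

assert reg.same_entity(("sap_crm", "C-1042"), ("elsevier", "auth-77"))
```

The registry also makes the chapter’s point (c) concrete: because resolution is keyed on the (system, local ID) pair, the same local identifier in two systems can safely map to different global entities.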

In Chapter II, “The Ontological Stance for a Manufacturing Scenario”, Michael Gruninger presents the notion of ontological stance. When software applications communicate with each other, there needs to be some way to ensure that the meaning of what one application accepts as input and output is accurately conveyed to the other application. Since the applications may not use the same terms to mean the same concepts, we need a way for an application to discover what another application means when it communicates. In order for this to happen, every application needs to publicly declare exactly what terms it is using and what these terms mean. This meaning should be encoded in a formal language which enables a given application to use automated reasoning to accurately determine the meaning of other applications’ terms. The author argues that in order to achieve complete semantic integration, we need to enable the automated exchange of information between applications in such a way that the intended semantics of the applications’ ontologies are preserved by the receiving application. The ontological stance is an operational characterization of the set of intended models for the application’s terminology. When the ontological stance methodology is used in conjunction with an interlingua ontology it is possible to axiomatize the implicit ontologies of software applications. Gruninger presents a set of compelling example cases drawn from manufacturing interoperability scenarios: ILOG Scheduler, SAP ERP data model and the process planning software MetCAPP. The interlingua ontology used was the Process Specification Language (PSL). One of the benefits of such an approach is that we can achieve correctness and completeness of the axiomatization with respect to the intended models of the application ontology since the interlingua ontology is verified. 
The correctness and completeness of the axiomatization with respect to the software application, as well as the correctness and completeness of the semantic mappings, is demonstrated through the use of the ontological stance.

In Chapter III, “Use of Semantic Mediation in Manufacturing Supply Chains”, Peter Denno, Edward J. Barkmeyer and Fabian Neuhaus argue for the use of semantic mediation to achieve semantic interoperability. The authors elaborate on the central role that systems engineering processes, and their formalisation and automation, play in facilitating semantic interoperability. Systems engineering is any methodical approach to the synthesis of an entire system that (a) defines views of that system that help elicit and elaborate requirements, and (b) manages the relationship of requirements to performance measures, constraints, risk, components, and discipline-specific system views. The authors continue that semantic interoperability can be achieved by reconciling differences of viewpoint that may be present in the system components whose joint work provides a system function. Reconciling differences in viewpoint that are exposed in component interfaces is a systems engineering task. Consequently, Denno and colleagues do not link semantic interoperability to models and truth conditions, but to behaviour, or the lack of intended behaviour: the absence of semantic interoperability between components is the inability to achieve some joint action of the system components, resulting from the inability of a component to respond with the intended behaviour when provided with the appropriate message. Thus, the essential form of their solution entails a relation of message structure elements to elements of ontologies. The authors present three exemplar research projects in the domain of semantic mediation.

In Chapter IV, “Service Integration through Structure-preserving Semantic Matching”, Fiona McNeill, Paolo Besana, Juan Pane and Fausto Giunchiglia present the structure-preserving semantic matching algorithm (SPSM). The domain of application is the integration of services. The authors argue that in large, open environments such as the Semantic Web, huge numbers of services are developed by vast numbers of different users. Imposing strict semantic standards in such an environment is futile; fully predicting in advance which services one will interact with is not always possible, as services may be temporarily or permanently unreachable, may be updated, or may be superseded by better services. In such situations, characterised by unpredictability, the best solution is to enable decisions about which services to interact with to be made on the fly. To achieve that, McNeill and colleagues propose a method that uses matching techniques to map the anticipated call to the input that the service is actually expecting. This must be done at run time, which is achievable with their SPSM algorithm. Their algorithm underpins a purpose-built system for service interaction that facilitates on-the-fly interaction between services in an arbitrarily large network, without any global agreements or pre-run-time knowledge of whom to interact with or how interactions will proceed. Their work is drawn from the European Commission funded consortium OpenKnowledge, and the system has been evaluated in an emergency response scenario: a flooding of the river Adige in the Trentino region of Italy.
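The flavour of matching an anticipated service call against the signature a service actually expects can be conveyed with a drastically simplified stand-in for SPSM (not the algorithm itself, which matches full term trees): represent a call as a two-level tree of function name plus argument labels, and score the overlap while keeping the two levels separate. All names and the threshold below are invented.

```python
# Illustrative, simplified stand-in for structure-preserving matching:
# a call is a small tree (name + argument labels); the score combines a
# name match with the Jaccard overlap of the argument-label sets.

def tree_similarity(call, signature):
    """Score in [0, 1]; higher means the call better fits the signature."""
    name_score = 1.0 if call["name"] == signature["name"] else 0.0
    a, b = set(call["args"]), set(signature["args"])
    arg_score = len(a & b) / len(a | b) if a | b else 1.0
    return 0.5 * name_score + 0.5 * arg_score

anticipated = {"name": "get_weather", "args": ["city", "date"]}
provided    = {"name": "get_weather", "args": ["city", "date", "units"]}

score = tree_similarity(anticipated, provided)
assert score > 0.8  # close enough to attempt on-the-fly invocation
```

The point of the structure-preserving constraint is visible even here: name and arguments are scored at their own levels of the tree, rather than being flattened into one bag of words.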

In Chapter V, “Streamlining semantic integration systems”, Yannis Kalfoglou and Bo Hu argue for a streamlined approach to integrating semantic integration systems. The authors elaborate on the abundance and diversity of semantic integration solutions and how this impairs strict engineering practice and ease of application. The versatile and dynamic nature of these solutions comes at a price: they do not work in sync with each other, nor is it easy to align them. Rather, they work as standalone systems, often leading to diverse and sometimes incompatible results. Hence the irony that we might need to address the interoperability of the very tools tackling information interoperability. Kalfoglou and Hu also report on an exemplar case from the field of ontology mapping, where systems that used seemingly similar integration algorithms and data yielded different results that were arbitrarily formatted and annotated, making interpretation and reuse of the results difficult. This makes it difficult to apply semantic integration solutions in a principled manner. The authors argue for a holistic approach that streamlines and glues together different integration systems and algorithms. This will bring uniformity of results and effective application of semantic integration solutions. If the proposed streamlining respects the design principles of the underlying systems, then engineers will have maximum configuration power and can tune the streamlined systems to obtain uniform and well understood results. The authors propose a framework for building such a streamlined system based on engineering principles, and an exemplar, purpose-built system, the CROSI Mapping System (CMS), which targets the problem of ontology mapping.

In Chapter VI, “Sharing resources through ontology alignments in a semantic peer-to-peer system”, Jérôme Euzenat, Onyeari Mbanefo and Arun Sharma present a peer-to-peer application for enabling semantic interoperability. The authors point to the bootstrapping problem of the Semantic Web: benefit will emerge when there is enough knowledge available; however, people are not willing to provide knowledge if doing so does not return immediate benefits. To overcome this problem, Euzenat and colleagues propose a semantic peer-to-peer system in which users can start developing, locally, the annotation scheme that suits them best. Once this is done, they can offer their resources to their friends and relatives through peer-to-peer sharing. Then, using global social interaction infrastructures, like Web 2.0 applications, the body of knowledge can quickly spread, thus overcoming the bootstrapping problem. However, the authors note that heterogeneity can occur because resources, often described in ontologies, are developed locally and are prone to terminological variations across peers. The remedy to this problem is to use semantic alignment in conjunction with a native peer-to-peer system. The authors describe a working example of this in the context of the PicSter system, an exemplar case of a heterogeneous semantic peer-to-peer solution. PicSter is a prototype for ontology-based peer-to-peer picture annotation that allows the ontologies used for annotation to be adapted to users’ needs.

In Chapter VII, “Quality-Driven, Semantic Information System Integration – The QuaD2-Framework”, Steffen Mencke, Martin Kunz, Dmytro Rud and Reiner Dumke elaborate on the need for a framework for quality-driven assembly of entities. The authors argue that although automation is an important aspect of integration solutions, quality must be of substantial interest as it is an inherent characteristic of any product. Mencke and colleagues continue that existing quality-related information can be reused to optimise the aggregation of entities by taking into account different characteristics, such as quality attributes, functional requirements or the ability for automated procedures. To this end, the authors propose a quality-driven framework for the assembly of entities. The QuaD2 framework is a first attempt to provide a holistic consideration of quality and functional requirements together with a substantial semantic description of all involved elements. This enables an automated procedure of entity selection and execution on the one hand, and substantial support for quality evaluation of the involved entities on the other. The presented quality-driven approach proposes the usage of semantic descriptions for process automation and supports different quality models and quality attribute evaluations. The easy extensibility of process models, entities, interfaces and quality models makes the presented framework deployable in many fields of application. The authors present a comprehensive application case of the QuaD2 framework in the domain of e-Learning.

Domain specific semantic interoperability practices

This section describes semantic interoperability uses in a variety of domains: the medical domain, for annotation of images and management of patient data; standards interoperability in the electrotechnical domain; e-commerce, with emphasis on business-to-business transactions; annotation and extraction of web data; e-tourism, with annotation and integration of heterogeneous e-tourism resources; and geospatial information sharing.

In Chapter VIII, “Pillars of Ontology Treatment in the Medical Domain”, Daniel Sonntag, Pinar Wennerberg, Paul Buitelaar and Sonja Zillner elaborate on a case study in the domain of medical image annotation and patient data management. Their work is drawn from a large-scale, nationally funded project, THESEUS MEDICO. The objective of this large collaborative project is to enable seamless integration of medical images and different user applications by providing direct access to image semantics. Semantic image retrieval should provide the basis for clinical decision support and computer-aided diagnosis. In particular, clinical care increasingly relies on digitised patient information. There is a growing need to store and organise all patient data, such as health records, laboratory reports and medical images, so that they can be retrieved effectively. At the same time, it is crucial that clinicians have access to a coherent view of these data within their particular diagnosis or treatment context. The authors argue that with traditional applications, users may browse or explore visualised patient data, but little help is given when it comes to the interpretation of what is being displayed. This is due to the fact, the authors continue, that the semantics of the data are not explicitly stated. Sonntag and colleagues elaborate on how this can be overcome by the incorporation of external medical knowledge from ontologies, which provide the meaning of the data at hand. The authors present a comprehensive case from the MEDICO project and elaborate on the lessons learnt: that only a combination of knowledge engineering, ontology mediation methods and rules can result in effective and efficient ontology treatment and semantic mediation. They argue that clinicians’ feedback, and their willingness to semantically annotate images and mediation rules, play a central role.

In Chapter IX, “A Use Case for Ontology Evolution and Interoperability: The IEC Utility Standards Reference Framework 62357”, Mathias Uslar, Fabian Grüning and Sebastian Rohjans present a case of semantic interoperability in the electricity generation domain, with emphasis on the uses and evolution of the International Electrotechnical Commission (IEC) framework 62357. The authors argue that the landscape for electricity delivery has changed considerably in recent years and that, with the advent of distributed power generation, many interoperability challenges lie ahead. In particular, with the deployment of new generation facilities such as wind power plants and fuel cells, energy is fed into the grid at different voltage levels and by different producers. Therefore, Uslar and colleagues argue, the communication infrastructure has to change: legal unbundling leads to a separation of systems, which have to be open to more market participants, and this results in more systems to be integrated and more data formats to be supported for compliance with those participants. The authors then focus on a key aspect of streamlining integration in this domain, the IEC standards framework, a working practice in today’s electricity generation domain. They introduce a specific methodology, COLIN, whose aim is to overcome problems with the adoption of the IEC family of standards and the heterogeneity of different providers. COLIN uses mediator ontologies, with the CIM OWL ontology serving as the basic domain and upper ontology. Uslar and colleagues show an example case of integrating the IEC 61970 and IEC 61850 standards families.

In Chapter X, “Semantic Synchronization in B2B-Transactions”, Janina Fengel, Heiko Paulheim and Michael Rebstock elaborate on how to provide dynamic semantic synchronisation between business partners using different e-business standards. Their approach is based on ontology mapping with interactive user participation. The need for semantic integration arose from the wish to enable seamless communication among the various business information systems within enterprises and across their boundaries. This need is becoming more acute in a business context where a company must work dynamically, in an ad-hoc fashion, with changing partners in a global market. In such an environment it is prohibitive, in both time and cost, the authors argue, to provide the ramp-up effort of predefining each information exchange before being able to conduct business. Fengel and colleagues therefore state that interoperability of the information to be exchanged is required as the basis for focusing on the business itself instead of on technological issues. To this end the authors developed the ORBI (Ontology Based Reconciliation for Business Integration) Ontology Mediator, a scalable solution for providing semantic synchronisation, and linked this work with a system for partition-based ontology matching. The proposed semantic synchronization approach is demonstrated in an example case with the e-business negotiation system M2N.
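
The ontology-mapping idea behind such synchronisation can be illustrated with a toy label-matching sketch. The term lists, the similarity measure and the threshold below are invented for illustration and bear no relation to ORBI’s actual vocabularies or matching algorithm; they merely show how candidate correspondences between two e-business standards might be proposed automatically, with ambiguous cases left to interactive user participation.

```python
from difflib import SequenceMatcher

# Hypothetical term lists from two e-business standards (illustration only).
STANDARD_A = ["OrderNumber", "DeliveryDate", "BuyerParty"]
STANDARD_B = ["order_no", "delivery_date", "buyer", "invoice_total"]

def normalise(term: str) -> str:
    """Lower-case and strip separators so labels become comparable."""
    return term.replace("_", "").replace("-", "").lower()

def match_terms(source, target, threshold=0.6):
    """Propose source->target correspondences whose label similarity
    exceeds the threshold; weaker candidates would be deferred to the
    interactive user participation the chapter describes."""
    mappings = {}
    for s in source:
        best, score = None, 0.0
        for t in target:
            ratio = SequenceMatcher(None, normalise(s), normalise(t)).ratio()
            if ratio > score:
                best, score = t, ratio
        if score >= threshold:
            mappings[s] = (best, round(score, 2))
    return mappings
```

For instance, `match_terms(STANDARD_A, STANDARD_B)` maps `DeliveryDate` to `delivery_date` with full confidence, while near-matches such as `BuyerParty`/`buyer` score lower and could be routed to a user for confirmation.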

In Chapter XI, “XAR: An Integrated Framework for Semantic Extraction and Annotation”, Naveen Ashish and Sharad Mehrotra present the XAR framework, which supports free-text information extraction and semantic annotation. The language underpinning XAR, Ashish and Mehrotra argue, allows for the inclusion of probabilistic reasoning within the rule language, provides higher-level predicates capturing text features and relationships, and defines and supports advanced features such as token consumption and stratified negation in the rule language and semantics. The XAR framework also allows semantic information to be incorporated as integrity constraints in the extraction and annotation process. XAR aims to fill a gap, the authors claim, in web-based information extraction systems: it provides an extraction and annotation framework permitting the integrated use of hand-crafted extraction rules, machine-learning-based extractors, and semantic information about the particular domain of interest. The XAR system has been deployed in an emergency response scenario with civic agencies in North America and in a scenario with the IT department of a county-level community clinic.
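
To give a flavour of extraction rules combined with integrity constraints, the toy sketch below pairs two regular-expression rules with one constraint check. This is not XAR’s declarative language (which is far richer, with probabilistic reasoning and stratified negation); the rule names, patterns and the “exactly one incident time” constraint are assumptions made purely for illustration.

```python
import re

# Illustrative extraction rules (not XAR syntax): name -> pattern.
RULES = {
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "time":  re.compile(r"\b\d{1,2}:\d{2}\s?(?:AM|PM)\b"),
}

def extract(text):
    """Apply all rules, then check an assumed integrity constraint:
    an incident report mentions at most one incident time. Violations
    flag the extraction for review instead of being silently accepted."""
    annotations = {name: rule.findall(text) for name, rule in RULES.items()}
    violations = []
    if len(annotations["time"]) > 1:
        violations.append("multiple incident times found")
    return annotations, violations
```

The point of the sketch is the shape of the pipeline: rule-driven annotation first, then domain constraints validating the result, which is the role integrity constraints play in XAR’s extraction and annotation process.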

In Chapter XII, “CRUZAR: an application of semantic matchmaking to eTourism”, Iván Mínguez, Diego Berrueta and Luis Polo present the CRUZAR system, which performs semantic matchmaking in the e-tourism domain. CRUZAR is a web application that uses expert knowledge, in the form of rules and ontologies, and a comprehensive repository of relevant data to build custom routes for each visitor profile. It has been deployed in collaboration with the city council of the Spanish city of Zaragoza. The authors use semantic technologies to integrate and organise data from different sources, to represent and transform user profiles and tourism profiles, and to capture all the information about the generated routes and their constraints. Since the information sources originate from different providers, heterogeneity problems arise, and CRUZAR uses a matchmaking algorithm to merge all the relevant information into a consistent user interface. Mínguez and colleagues elaborate on the lessons learnt from using semantic web technologies in CRUZAR and on their experiences with the pilot application at the city of Zaragoza tourism services.
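
The matchmaking step can be sketched minimally as scoring points of interest against a visitor profile and ranking them for route building. The interest sets, the Jaccard-overlap score and the POI names below are invented for illustration; CRUZAR’s actual algorithm is rule- and ontology-driven, not a simple set overlap.

```python
def match_score(profile_interests, poi_tags):
    """Jaccard overlap between the visitor's interests and a POI's tags
    (an assumed stand-in for a semantic matchmaking score)."""
    overlap = profile_interests & poi_tags
    union = profile_interests | poi_tags
    return len(overlap) / len(union) if union else 0.0

def rank_pois(profile_interests, pois):
    """Order candidate POIs by descending match score, as input to
    building a custom route for the visitor."""
    scored = [(match_score(profile_interests, tags), name)
              for name, tags in pois.items()]
    return [name for score, name in sorted(scored, reverse=True)]
```

A profile interested in architecture and history would then rank a historic palace above a shopping mall, which is the kind of per-visitor ordering the chapter’s route builder relies on.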

In Chapter XIII, “Data Integration in the Geospatial Semantic Web”, Patrick Maue and Sven Schade present insights into, and applications of, semantic integration in the domain of geospatial information systems. In particular, the authors elaborate on the heterogeneities that occur across multiple sources of geospatial information and argue that they impair the discovery of new information and decrease the usefulness of geographic information retrieval, a notion that encompasses all the tasks needed to acquire geospatial data from the web using search or catalogue data. Maue and Schade describe their solution, which is drawn from the European Commission funded consortium SWING and aims to facilitate the discovery, evaluation and usage of geographic information on the internet. The authors present an example case from BRGM, the French geological survey, which is responsible for the exploration and management of potential quarry sites in France. The authors also present their findings on deploying semantic web technologies to support geospatial decision making.

In Chapter XIV, “An ontology based GeoDatabase Interoperability Platform”, Serge Boucher and Esteban Zimányi present a platform for interoperating geographical information sources. The authors elaborate on the lessons learnt from deploying a prototype system to address interoperability needs between different cartographic systems in the domain of military operations. Boucher and Zimányi analyse the issues involved in integrating heterogeneous data silos from a cost-effectiveness perspective, and discuss the beneficial role that knowledge representation can play in the field of geographical databases.

    Yannis Kalfoglou,
    RICOH Europe plc and University of Southampton, UK