Insights into Advancements in Intelligent Information Technologies: Discoveries

Vijayan Sugumaran (Oakland University, USA)
Release Date: February, 2012|Copyright: © 2012 |Pages: 364|DOI: 10.4018/978-1-4666-0158-1
ISBN13: 9781466601581|ISBN10: 1466601582|EISBN13: 9781466601598
List Price: $195.00


Intelligent Information Technologies are vital to businesses, hospitals, research facilities, and any number of other institutions that rely on them to remain current within their respective fields and to keep their access to data secure and efficient.

Insights into Advancements in Intelligent Information Technologies: Discoveries presents the latest research, methodologies, frameworks, and advances in the field. Among business enterprises that share resources and compete for market share, information and its uses are crucial to growth. This volume gathers case studies and research from around the globe into the most current collection of work within the field to date.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Artificial Intelligence
  • Autonomous Systems
  • Bioinformatics
  • Data Modeling
  • Database Technologies
  • Metadata
  • MIS
  • Privacy and Security
  • Recommendation Systems
  • Retrieval Engines





Multi-agent systems and semantic technologies have been recognized as important information technologies for minimizing the cognitive load on decision makers and for promoting interoperability between systems that must share data and information in collaborative problem solving. Significant progress has been made over the last few years in the development of multi-agent systems and Semantic Web applications in fields such as electronic commerce, supply chain management, resource allocation, intelligent manufacturing, mass customization, simulation, and healthcare. While research on multi-agent systems and semantic technologies is progressing at a fast pace, a number of issues remain to be explored in the design and implementation of multi-agent systems for decision support. For example, formal approaches to agent-oriented modeling and simulation for decision making, ontology-based information systems, ontology engineering, semantics for data integration, agent-based decision support systems, multi-agent systems for business intelligence, and semantic technologies for knowledge management are all areas in need of further research.

Intelligent agents and multi-agent systems are increasingly employed in Web applications, particularly in searching for information on the Web. The Semantic Web, the next-generation Web technology, is expected to improve this information retrieval process and to help execute various tasks using intelligent agents. For example, a software agent could gather all the necessary information from a multitude of sources in order to support the user in a problem-solving task. A fundamental problem in current Internet search mechanisms is the vagueness of users' information needs. Search queries on the Internet are rarely longer than two or three terms, and a search session tends to consist of six or seven queries. The more advanced search options are used by only a small fraction of users. Search applications therefore have very little information about the documents users are looking for. Even when longer queries are posted, current search applications cannot uncover and address the user's real information needs: the terms are treated only as keywords that are matched against term frequencies in documents, with no attempt to understand their meanings or how they relate to each other.

The idea of semantic search is to use precisely defined domain vocabularies (ontologies) to interpret user queries and retrieve documents based on content rather than term matching. It is, however, unclear how this can best be done in a large-scale search environment. Building semantic indices seems infeasible due to space and time requirements, and it is not reasonable to ask users to post semantically defined queries: that would require users to browse through potentially thousands of concepts they might not even understand and select the appropriate ones. Most systems today use ontologies to expand or reformulate queries without much involvement from the user. Thus, these systems do not take into account the intent and information needs of the user.

From the user's point of view, the standard query reformulation approach is problematic for several reasons. First, since the ontology serves as a standardized vocabulary for all users, it cannot be assumed to reflect the exact terminology of any individual user. If queries are altered without the user's knowledge, the user has no opportunity to correct the system's understanding of his or her information needs. Second, users are often vague because they are not entirely sure what they are searching for. The ontology provides a conceptual summary of the domain and could help uncover the user's needs, but it is often too large and too complex to be presented directly to the user. What is needed is a more interactive way of semantically exploring the user's information needs, in which the user and the system collaborate to uncover the desired information.

Novel approaches to Web searching are being developed in which the search framework supports semantic query interpretation and expansion and allows users to interactively drill down into the result set for the information they need. Central to this approach is the definition of ontological profiles. An ontological profile is an enriched ontology in which each class, instance, and relationship is given a weight that characterizes its prominence with respect to the documents or logs at hand. For example, a profile may reveal that a certain user group uses Thinkpad and Vaio synonymously with the concept of laptops, but with different probabilities and different links to other concepts. Moreover, the weights can be used to rank the user group's references to instances of laptop, thereby generating a list of their most popular laptop models. Constructed with reference to the result set from a search engine, the profile provides a semantically ranked summary of all the retrieved documents. Similarly, an ontological profile based on query logs gives a semantic understanding of the language used by users.
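As a rough illustration of the idea, an ontological profile can be sketched as an ontology whose instances and term-to-concept links carry weights. All class names, terms, and weights below are hypothetical, chosen only to mirror the laptop example above:

```python
from collections import defaultdict

class OntologicalProfile:
    """A sketch of an ontological profile: an ontology whose instances
    and term-to-concept links carry weights reflecting their prominence
    in a document set or query log."""

    def __init__(self):
        self.instances = defaultdict(dict)  # class -> {instance: weight}
        self.synonyms = defaultdict(dict)   # term  -> {concept: probability}

    def add_instance(self, cls, instance, weight):
        self.instances[cls][instance] = weight

    def add_synonym(self, term, concept, probability):
        self.synonyms[term][concept] = probability

    def rank_instances(self, cls):
        # Rank a class's instances by weight, e.g. a user group's
        # most popular laptop models.
        return sorted(self.instances[cls].items(),
                      key=lambda kv: kv[1], reverse=True)

profile = OntologicalProfile()
# Hypothetical weights, as if derived from a query log:
profile.add_synonym("thinkpad", "laptop", 0.7)
profile.add_synonym("vaio", "laptop", 0.4)
profile.add_instance("laptop", "ThinkPad T42", 0.9)
profile.add_instance("laptop", "Vaio FS215", 0.6)

print(profile.rank_instances("laptop"))
# -> [('ThinkPad T42', 0.9), ('Vaio FS215', 0.6)]
```

Ranking instances is then a simple sort over the weights, which is how a list of a user group's most popular models falls out of the profile.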

These approaches, for example, take an existing OWL ontology for a domain and build ontological profiles that reflect the users’ language and the content of a representative set of Web documents from the domain. Techniques from ontology learning will be evaluated and adapted for this purpose. During query processing, the user queries are first mapped onto ontological concepts using the profile and thereafter expanded with the terms referring to the concepts in the standard document index. This results in a semantic query expansion approach that also takes into account the fact that the user may utilize different terminology compared to the authors of the documents. Thus, the outcome of this stream of research is an integrated search environment that makes use of existing ontologies and an existing search engine. This environment includes text mining components for generating ontological profiles, a query interpretation module for query expansion, and an interactive visualization module for presenting semantic maps and allowing the user to browse the map and produce more refined semantic queries.
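The two-step query processing described above (map the user's terms onto ontological concepts via the profile, then expand the query with the terms that refer to those concepts in the document index) can be sketched as follows; the term-concept mappings are invented stand-ins for a real ontological profile:

```python
# Hypothetical mappings; in the approach described above these would be
# derived from the ontological profile, not hard-coded.
TERM_TO_CONCEPTS = {
    "thinkpad": ["laptop"],
    "cheap": ["price"],
}
CONCEPT_TO_TERMS = {
    "laptop": ["laptop", "notebook", "portable computer"],
    "price": ["price", "cost", "discount"],
}

def expand_query(query):
    """Map query terms onto concepts, then expand the query with the
    terms referring to those concepts in the standard document index."""
    terms = query.lower().split()
    expanded = list(terms)  # keep the user's own terminology first
    for term in terms:
        for concept in TERM_TO_CONCEPTS.get(term, []):
            for t in CONCEPT_TO_TERMS.get(concept, []):
                if t not in expanded:
                    expanded.append(t)
    return expanded

print(expand_query("cheap thinkpad"))
# -> ['cheap', 'thinkpad', 'price', 'cost', 'discount',
#     'laptop', 'notebook', 'portable computer']
```

Keeping the original terms alongside the expansion reflects the point made above: the user's vocabulary may differ from the document authors', so neither should be discarded.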


In Chapter 1, “Generating Knowledge-Based System Generators: A Software Engineering Approach,” Sabine Moisan investigates software engineering techniques for designing and reengineering knowledge-based system generators, focusing on inference engines and domain-specific languages. Software development of knowledge-based systems is a difficult task, and the author adopts a software engineering approach to favor code reuse, evolution, and maintenance. Moisan proposes a software platform named LAMA for designing the different elements necessary to produce a knowledge-based system. This platform offers software toolkits (mainly component frameworks) to build interfaces, inference engines, and expert languages. The author has used the platform to build several KBS generators for various tasks (planning, classification, model calibration) in different domains. The approach is well suited to knowledge-based system generators; it gives developers a significant gain in time and improves software readability and safety.

Authors Ilaria Baffo, Giuseppe Confessore, and Graziano Galiano, in Chapter 2, “A Model to Increase the Efficiency of a Competence-Based Collaborative Network,” provide a model based on the Multi-Agent System (MAS) paradigm that serves as a methodological basis for evaluating the dynamics of a collaborative environment. The model's dynamics are driven strictly by the concept of competence. In the proposed MAS, the agents represent the actors operating in a given area and come in three distinct typologies: (i) the territorial agent, (ii) the enterprise agent, and (iii) the public agent. Each agent has its own local information and goals and interacts with the others through an interaction protocol. Decision-making processes and competencies characterize each of the agent typologies working in the system in a specific way.

Chapter 3, “Algorithm for Decision Procedure in Temporal Logic Treating Uncertainty, Plausibility, Knowledge and Interacting Agents,” studies the logic UIALTL, a combination of the linear temporal logic LTL, a multi-agent logic with an operation for passing knowledge via agents’ interaction, and a suggested logic based on an operation of logical uncertainty. Author V. Rybakov explains that the logical operations of UIALTL include (together with the operations of LTL) operations of strong and weak until, agents’ knowledge operations, an operation of knowledge via interaction, an operation of logical uncertainty, and operations for environmental and global knowledge. UIALTL is defined as the set of all formulas valid in all Kripke-Hintikka-like models NC. Each frame NC represents a possibly unbounded (in time) computation with multiple processors (parallel computational units) and agents’ channels for connections between computational units. The main aim of the chapter is to determine possible ways of computing the logical laws of UIALTL; the principal problems dealt with are the decidability and satisfiability problems for UIALTL. The author finds an algorithm that recognizes theorems of UIALTL (thereby showing that UIALTL is decidable) and solves the satisfiability problem for UIALTL. As instruments, the author uses the reduction of formulas to rules in reduced normal form, a technique for contracting models NC to special non-UIALTL models, and the verification of the validity of these rules in models of bounded size. The chapter uses standard results from non-classical logics based on Kripke-Hintikka models.

In Chapter 4, “Multiagent Based Selection of Tutor-Subject-Student Paradigm in an Intelligent Tutoring System,” authors Kiran Mishra and R. B. Mishra discuss intelligent tutoring systems (ITS). More specifically, they investigate the development of two main interconnected modules: the pedagogical module and the student module. The pedagogical module concerns the design of a teaching strategy that combines the interests of the student, the tutor’s capability, and the characteristics of the subject. Very few effective models have been developed that combine the cognitive, psychological, and behavioral components of tutor and student with the characteristics of a subject in an ITS. Mishra and Mishra have developed a tutor-subject-student (TSS) paradigm for the selection of a tutor for a particular subject. A tutor’s selection index is calculated from his or her performance profile, preference, desire, intention, capability, and trust. A student’s aptitude is determined from his or her answers to seven types of subject topic categories: analytical, reasoning, descriptive, analytical reasoning, analytical descriptive, reasoning descriptive, and analytical reasoning descriptive. The selection of a tutor is performed for a particular type of topic in the subject on the basis of the student’s aptitude.

Chapter 5 starts from the observation that supply chain structure is a key factor affecting information sharing. Yifeng Zhang and Siddhartha Bhattacharyya discuss in their chapter, “Information Sharing Strategies in Business-to-Business E-Hubs: An Agent-Based Study,” how Business-to-Business (B2B) e-hubs have fundamentally changed many companies’ supply chain structure from a one-to-many to a many-to-many configuration. Traditional supply chains typically center on one company that interacts with multiple suppliers or customers, forming a one-to-many structure. B2B e-hubs, by contrast, usually connect many buyers and sellers without being dominated by a single company, thus forming a many-to-many configuration. Information sharing in traditional supply chains has been studied extensively, but little attention has been paid to information sharing in B2B e-hubs. In this study, the authors identify and examine five information sharing strategies in B2B e-hubs. Agent performance under the different strategies is measured and analyzed using an agent-based e-hub model, and practical implications are discussed.

Sam Kin Meng and C. R. Chatwin authored Chapter 6, “Ontology-Based Shopping Agent for E-Marketing.” Meng and Chatwin note that before Internet consumers make buying decisions, several psychological factors come into effect and reflect individual preferences on products. In this chapter, the authors investigate four integrated streams: 1) recognizing the psychological factors that affect Internet consumers, 2) understanding the relationship between businesses’ e-marketing mix and Internet consumers’ psychological factors, 3) designing an ontology mapping businesses’ e-marketing mix to Internet consumers’ decision-making styles, and 4) developing a shopping agent based on the ontology. The relationship between businesses’ e-marketing mix and Internet consumers’ psychological factors is important because it can identify situations in which both businesses and Internet consumers benefit. The authors’ ontology can be used to share Internet consumers’ psychological factors, the e-marketing mix of online businesses, and their relationships with different computer applications.

Chapter 7 continues with “A New Behavior Management Architecture for Language Faculty of an Agent for Task Delegation” by S. Kuppuswami and T. Chithralekha. In this chapter, the authors describe a new architecture for the language faculty of an agent that fulfills the interaction requirements of task delegation. The architecture is based on a conceptualization of the language faculty of an agent and a definition of its internal state paradigm. The new architecture is behavior-management based and possesses self-management properties. It is compared with existing abstract self-management architectures to show how it resolves issues the older models leave open. The architecture description is followed by a case study, a Multilingual Natural Language Agent Interface for Mail Service, which illustrates its application.

In the next chapter, authors Hung W. Chu and Minh Q. Huynh examine the effects of Information Systems/Technologies (IS/T) on the performance of firms engaged in growth strategies based on mergers and acquisitions (M&A). Chapter 8, “Effective Use of Information Systems/Technologies in the Mergers and Acquisitions Environment: A Resource-Based Theory Perspective,” presents a model, derived from a resource-based theory of the firm, developed to predict the influence of IS/T on firm performance. Data on the financial performance of 133 firms are used to gauge the impact of IS/T on various M&A objectives. The results suggest that IS/T implementations serve M&A objectives that seek to increase overall efficiency better than those that seek to introduce new products or increase sales. Future studies examining the process of introducing new products from a resource-based theory perspective are suggested.

Chapter 9, “Incremental Load in a Data Warehousing Environment,” by Nayem Rahman, argues that incremental load is an important factor in successful data warehousing. The lack of standardized incremental refresh methodologies can lead to poor analytical results, which can be unacceptable to an organization’s analytical community. Successful data warehouse implementation depends on consistent metadata as well as incremental data load techniques. If consistent load timestamps are maintained and efficient transformation algorithms are used, it is possible to refresh databases with complete accuracy and with little or no manual checking. The chapter proposes an Extract-Transform-Load (ETL) metadata model that archives load observation timestamps and other useful load parameters. Rahman also recommends algorithms and techniques for incremental refreshes that enable table loading while ensuring data consistency and integrity and improving load performance. Besides significantly improving the quality of incremental load techniques, these methods save a substantial amount of data warehouse system resources.
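The timestamp-driven refresh idea can be sketched with SQLite. The table layout, the etl_metadata table, and all column names below are illustrative, not Rahman's actual model; the point is only that a recorded load timestamp lets each refresh pull just the rows changed since the last one:

```python
import sqlite3

def incremental_load(conn, source, target):
    """Refresh `target` with only the rows of `source` changed since the
    last load, then archive the new load observation timestamp."""
    cur = conn.cursor()
    cur.execute("SELECT last_load_ts FROM etl_metadata WHERE target = ?",
                (target,))
    last_ts = cur.fetchone()[0]
    # Pull only rows modified after the previous refresh.
    cur.execute(f"INSERT INTO {target} SELECT * FROM {source} "
                f"WHERE updated_at > ?", (last_ts,))
    # Record the new high-water mark in the same transaction as the load.
    cur.execute(f"UPDATE etl_metadata SET last_load_ts = "
                f"(SELECT COALESCE(MAX(updated_at), ?) FROM {source}) "
                f"WHERE target = ?", (last_ts, target))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales_stg (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE sales_dw  (id INTEGER, amount REAL, updated_at TEXT);
    CREATE TABLE etl_metadata (target TEXT PRIMARY KEY, last_load_ts TEXT);
    INSERT INTO etl_metadata VALUES ('sales_dw', '1970-01-01T00:00:00');
    INSERT INTO sales_stg VALUES (1, 10.0, '2024-01-01T00:00:00');
""")
incremental_load(conn, "sales_stg", "sales_dw")
incremental_load(conn, "sales_stg", "sales_dw")  # re-run loads nothing new
```

Because the high-water mark advances with each successful load, re-running the refresh is idempotent: unchanged rows are never reloaded, which is the "little or no manual checking" property the chapter aims for.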

In Chapter 10, “A Fuzzy-Neural Approach with Collaboration Mechanisms for Semiconductor Yield Forecasting,” Toly Chen discusses why yield forecasting is critical to a semiconductor manufacturing factory. To further enhance the effectiveness of semiconductor yield forecasting, a fuzzy-neural approach with collaboration mechanisms is proposed. The methodology modifies Chen and Lin’s approach by incorporating two collaboration mechanisms: a favoring mechanism and a disfavoring mechanism. The former helps achieve consensus among multiple experts so that the actual yield is not missed, while the latter shrinks the search region to increase the probability of finding the actual yield. To evaluate its effectiveness, the methodology was applied to several real cases. According to the experimental results, it improved the precision and accuracy of semiconductor yield forecasting by 58% and 35%, respectively.

In Chapter 11, Ivo José Garcia dos Santos and Edmundo Roberto Mauro Madeira present “A Semantic-Enabled Middleware for Citizen-Centric E-Government Services.” The chapter highlights that research efforts toward effective e-government infrastructures have gained momentum, motivated mainly by increasing demands to improve citizen participation in public processes, promote social e-inclusion, and reduce bureaucracy. One of the biggest challenges is providing effective techniques to handle the inherent heterogeneity of the systems and processes involved and to make them interoperable. The chapter presents a semantically enriched middleware for citizen-oriented e-government services (CoGPlat), which facilitates the development and operation of new e-government applications with higher levels of dynamism. It introduces composition techniques based on semantic descriptions and ontologies. Requirements such as autonomy, privacy, and traceability are handled by applying policies that govern the interactions among services.

Chapter 12, “Comparison of the Hybrid Credit Scoring Models Based on Various Classifiers,” by Fei-Long Chen and Feng-Chia, addresses why credit scoring is an important topic for businesses and socio-economic establishments that collect huge amounts of data, with the aim of minimizing wrong credit decisions. The authors propose four approaches that combine four well-known classifiers: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Back-Propagation Network (BPN), and Extreme Learning Machine (ELM). These classifiers are paired with feature selection methods that retain sufficient information for classification purposes, and different credit scoring combinations are constructed from the four feature selection approaches and the classifiers. Two credit data sets from the University of California, Irvine (UCI) are chosen to evaluate the accuracy of the various hybrid feature selection models. Chen and Feng-Chia then describe and evaluate the performance of the procedures that make up the proposed approaches.
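As a standard-library-only illustration of the hybrid pattern (a feature selection step feeding a classifier), the sketch below pairs a crude class-mean filter with a 1-nearest-neighbor classifier. The chapter's own selection methods and classifier tuning are not reproduced here, and the applicant data are invented:

```python
import math

def feature_scores(X, y):
    """Score each feature by the gap between its class means
    (a crude filter-style criterion; real hybrids use richer ones)."""
    scores = []
    for j in range(len(X[0])):
        good = [row[j] for row, label in zip(X, y) if label == 1]
        bad = [row[j] for row, label in zip(X, y) if label == 0]
        scores.append(abs(sum(good) / len(good) - sum(bad) / len(bad)))
    return scores

def select_features(X, y, k):
    """Keep only the k highest-scoring features."""
    scores = feature_scores(X, y)
    keep = sorted(sorted(range(len(scores)), key=scores.__getitem__,
                         reverse=True)[:k])
    return [[row[j] for j in keep] for row in X], keep

def knn_predict(X_train, y_train, x, k=1):
    """Classify x by majority vote among its k nearest neighbors."""
    dists = sorted((math.dist(row, x), label)
                   for row, label in zip(X_train, y_train))
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Invented applicant data: feature 0 is informative, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 5.1], [0.8, 0.9]]
y = [0, 0, 1, 1]  # 1 = creditworthy
X_sel, kept = select_features(X, y, k=1)
print(kept, knn_predict(X_sel, y, [0.75]))
```

The value of the hybrid lies in the division of labor: the selector discards the noisy feature before the distance-based classifier, which would otherwise be misled by it, ever sees the data.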

Chapter 13, “Facilitating Decision Making and Maintenance for Power Systems Operators through the Use of Agents and Distributed Embedded Systems,” examines the improvements obtained when multimedia information is added to the traditional SCADA systems used in electric facility management and maintenance. Authors A. Carrasco, M. C. Romero-Ternero, F. Sivianes, M. D. Hernández, D. I. Oviedo, and J. Escudero also discuss the use of telecontrol in the electric sector, with the fundamental objective of providing increased and improved service to the operators who manage these systems. One of the most important contributions is the use of an agent network distributed around the electric facility. Through the use of multi-agent technology and its placement in embedded systems, the authors design a system with a degree of intelligence and independence that optimizes data collection and provides reaction proposals for the operator. The proposed agent-based architecture is also reviewed in this chapter, as are the design of an example agent and the results obtained in a pilot deployment of the proposed hardware platform.

Authors Arzu Baloglu, Mudasser F. Wyne, and Yilmaz Bahcetepe penned Chapter 14, “Web 2.0 Based Intelligent Software Architecture for Photograph Sharing.” They highlight that with the development of Web 2.0 technologies, photograph sharing has increased. In this chapter, the authors evaluate the art of photography, analyze how to develop an intelligent photograph-sharing system, and explain the requirements of such systems. They present the architecture of an intelligent Web 2.0-based system and hope in the future to add more modules for retaining users on the system. The system focuses on Web 2.0 usage and Web mining for personalization, and it brings a different approach to collaborative filtering.

In Chapter 15, “Collusion-Free Privacy Preserving Data Mining,” authors M. Rajalakshmi, T. Purusothaman, and S. Pratheeba discuss distributed association rule mining, an integral part of data mining that extracts useful information hidden in distributed data sources. Because local frequent itemsets are globalized from the data sources, sensitive information about individual sources needs strong protection. Different privacy-preserving data mining approaches for distributed environments have been proposed, but in the existing approaches, collusion among the participating sites can reveal sensitive information about the other sites. This chapter proposes a collusion-free algorithm for mining global frequent itemsets in a distributed environment with minimal communication among sites. The algorithm splits and sanitizes the itemsets and communicates with random sites in two different phases, making it difficult for colluders to retrieve sensitive information. Results show that the consequences of collusion are greatly reduced without affecting mining performance, and that communication among sites remains optimal.
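The chapter's exact protocol is not reproduced here, but the underlying splitting idea can be sketched: each site divides its local support count into random shares that sum back to the original, so a single share routed through another site reveals nothing on its own, while the global support is still recoverable as the sum of all shares:

```python
import random

def split_count(count, n_shares):
    """Split a local support count into random integer shares that sum
    back to the original count; no single share reveals the count."""
    shares = [random.randint(-count, count) for _ in range(n_shares - 1)]
    shares.append(count - sum(shares))
    return shares

def global_support(local_counts, n_shares=3):
    """Each site splits its count; the shares are (conceptually) routed
    through random sites, and only the global sum is recovered."""
    pool = []
    for c in local_counts:
        pool.extend(split_count(c, n_shares))
    random.shuffle(pool)  # shares arrive from random sites, unattributed
    return sum(pool)

print(global_support([120, 75, 40]))  # -> 235
```

An itemset's global frequency can thus be checked against the support threshold without any site learning another site's local count, which is the property that defeats colluding participants.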

The book concludes with Chapter 16, “Multi-Agent Negotiation in B2C E-Commerce Based on Data Mining Methods” by Bireshwar Dass Mazumdar and R. B. Mishra. The chapter discusses the multi-agent system (MAS) model, which has been used extensively in e-commerce tasks such as customer relationship management (CRM), negotiation, and brokering. For CRM to succeed, it is important to target a company’s most profitable customers. The chapter presents a multi-attribute negotiation approach for negotiation between buyer and seller agents. The communication model and the algorithms for the various actions involved in the negotiation process are described. The chapter also proposes a multi-attribute utility model based on price, response time, and quality. In support of this approach, a prototype system providing negotiation between buyer agents and seller agents is presented.


Dr. Sugumaran’s research has been partly supported by Sogang Business School’s World Class University Program (R31-20002) funded by Korea Research Foundation.

Author(s)/Editor(s) Biography

Vijayan Sugumaran is a Professor of Management Information Systems in the department of Decision and Information Sciences at Oakland University in Rochester, Michigan, as well as a WCU Professor of Service Systems Management and Engineering at Sogang University in South Korea. He received his PhD in Information Technology from George Mason University in Fairfax, VA, and his research interests are in the areas of service science, ontologies and Semantic Web, intelligent agent and multi-agent systems, component-based software development, and knowledge-based systems. His most recent publications have appeared in Information Systems Research, ACM Transactions on Database Systems, IEEE Transactions on Education, IEEE Transactions on Engineering Management, Communications of the ACM, Healthcare Management Science, and Data and Knowledge Engineering. He has published over 150 peer-reviewed articles in journals, conferences, and books. He has edited ten books and two journal special issues, and he is the editor-in-chief of the International Journal of Intelligent Information Technologies and also serves on the editorial board of seven other journals. He was the program co-chair for the 13th International Conference on Applications of Natural Language to Information Systems (NLDB 2008). In addition, he has served as the chair of the Intelligent Agent and Multi-Agent Systems mini-track for the Americas Conference on Information Systems (AMCIS 1999 - 2012) and the Intelligent Information Systems track for the Information Resources Management Association International Conference (IRMA 2001, 2002, 2005 - 2007). He served as Chair of the E-Commerce track for the Decision Science Institute’s Annual Conference, 2004. He was the Information Technology Coordinator for the Decision Sciences Institute (2007-2009). He also regularly serves as a program committee member for numerous national and international conferences.