In today's global economy, companies increasingly use intelligent and adaptive technologies to improve their efficiency, deliver high-quality products to the marketplace more quickly and cheaply, innovate, and gain competitive advantage. With the advent of the Web and other enabling technologies, there is a growing trend toward the convergence of information and communication technologies in a number of sectors. Broadly speaking, the notion of technology convergence refers to the synergistic amalgamation of conventional industry, Information Technology (IT), and communication technology. With respect to the development of novel and innovative applications, cutting-edge technologies are used to create new products that can improve the quality of life. Thus, Intelligent, Adaptive and Reasoning Technologies (IART) that facilitate product innovation and efficiency are integrated with traditional approaches. This convergence, referred to as IART convergence, makes businesses more agile and nimble, able to respond to changes in the marketplace by offering new products and services that meet user needs.
One major area of application of IART convergence is Knowledge Management (KM) and Business Intelligence (BI). These domains have been active areas of research within the Information Systems (IS) community. Prior research on knowledge management has primarily focused on the factors that affect the creation, capture, transfer, and use of knowledge in organizations, as well as on different forms of knowledge and their usage. Despite some initial success, knowledge management has not been widely adopted and practiced, owing to the lack of appropriate tools and technologies to foster knowledge creation, reuse, and exchange within and among different communities. Similarly, business intelligence systems have been hampered by numerous technical and managerial challenges. The lack of data standardization and semantics has limited the interoperability of data between various data sources and systems. With recent developments in semantic technologies and services and the IART convergence, there is renewed interest in KM and BI. The underlying goal of a semantics- and services-based approach to KM and BI is to foster knowledge and data exchange through mediation. For example, each community within a network can create a local ontology and metadata that capture the syntactic and semantic aspects of the data and knowledge relevant to that community. The ontologies and metadata from multiple communities can then be integrated to create a global ontology and meta-schema, which provide interoperability between the community networks and facilitate broader collaboration.
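The local-to-global mediation idea can be sketched in a few lines of Python. The community vocabularies and concept names below are purely hypothetical illustrations, not taken from any real system: each community maps its own terms onto shared global concepts, and translation between communities goes through that global layer.

```python
# Two hypothetical local ontologies: community-specific terms mapped to
# shared global concepts (all names are invented for illustration).
sales_ontology = {"client": "Customer", "deal": "Order"}
support_ontology = {"caller": "Customer", "ticket": "ServiceRequest"}

def build_global_ontology(*local_ontologies):
    """Merge local term->concept mappings into one global mapping."""
    global_ontology = {}
    for local in local_ontologies:
        global_ontology.update(local)
    return global_ontology

def translate(term, source, target, global_ontology):
    """Mediate: find the target community's term for a source community's
    term by going through the shared global concept."""
    concept = source.get(term)
    for candidate, candidate_concept in target.items():
        if candidate_concept == concept:
            return candidate
    return None  # no shared concept; mediation fails

glob = build_global_ontology(sales_ontology, support_ontology)
print(translate("client", sales_ontology, support_ontology, glob))  # caller
```

A real mediation layer would of course handle conflicting definitions and richer mappings than this one-to-one dictionary, but the principle — exchange through a shared global schema rather than pairwise translation — is the same.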
With the advent of the Semantic Web and service orientation, considerable research is underway on semantic technologies and service-oriented architectures for developing intelligent systems and applications that can understand the semantics of the content in Web resources. These technologies help manage, integrate, and analyze data, i.e., individual information elements within a single document or from multiple data sources. Semantic technologies add meaning that characterizes the content of resources. This facilitates representations of Web content that enable applications to access information autonomously using common search terms. A fundamental building block of semantic technology is the ontology. Ontologies express the basic concepts that exist in a domain and the relationships between those concepts. They form the backbone of the Semantic Web and are used to reason about entities in a particular domain as well as to support knowledge sharing and reuse. Ontologies can also specify complex constraints on the types of resources and their properties. OWL (Web Ontology Language) is the most popular ontology language used by applications to process content from a resource without human intervention. Thus, it facilitates machine interoperability by providing the necessary vocabulary along with formal semantics.
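As a minimal illustration of how an ontology's concepts and relationships support reasoning, the following Python sketch encodes subclass relationships and checks subsumption transitively. The concept names are invented, and this is far simpler than a real OWL ontology, which would also carry properties, constraints, and formal semantics:

```python
# A toy ontology: each concept maps to its direct superclass.
# Concept names are illustrative, not from any standard vocabulary.
subclass_of = {
    "Laptop": "Computer",
    "Computer": "Electronics",
    "Electronics": "Product",
}

def is_a(concept, ancestor):
    """Return True if `concept` is (transitively) a subclass of `ancestor`.

    This walks the subclass chain upward, i.e., it computes membership in
    the transitive closure of the subclass relation.
    """
    while concept in subclass_of:
        concept = subclass_of[concept]
        if concept == ancestor:
            return True
    return False

print(is_a("Laptop", "Product"))   # True: Laptop -> Computer -> Electronics -> Product
print(is_a("Product", "Laptop"))   # False: subsumption is directional
```

An OWL reasoner performs this kind of subsumption inference (and much more) over formally specified class hierarchies; the point here is only that even a simple concept hierarchy licenses inferences that are not stated explicitly.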
Insofar as reasoning is concerned, rule languages are used to write inference rules in a standard way so that they can be used for automated reasoning in a particular domain. A rule language provides a kernel specification for rule structures that can be used for rule interchange, which facilitates rule reuse and extension. Querying Web content and automatically retrieving relevant segments from a resource is the driving force behind Web query languages. They provide a protocol for querying RDF graphs via pattern matching, supporting basic conjunctive patterns, value filters, optional patterns, and pattern disjunction. Logic and reasoning are also an integral part of the Semantic Web. A reasoning system can use one or more ontologies to make new inferences based on the content of a particular resource. It also helps identify appropriate resources that meet a particular requirement. Thus, a reasoning system enables applications to extract appropriate information from various resources. Logic provides the theoretical underpinning required for reasoning and deduction; first-order logic and description logics are commonly used to support reasoning.
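The two mechanisms just described — triple-pattern matching and rule-based inference — can be sketched in plain Python. The toy triple store and the `worksFor`/`locatedIn` vocabulary below are invented for illustration, standing in for what a query language such as SPARQL and a rule language such as SWRL would provide over real RDF data:

```python
# A tiny RDF-style triple store: (subject, predicate, object) tuples.
triples = {
    ("alice", "worksFor", "acme"),
    ("acme", "locatedIn", "detroit"),
}

def match(pattern, store):
    """Yield variable bindings for a triple pattern.

    Terms starting with '?' are variables (as in SPARQL); other terms
    must match the stored triple exactly.
    """
    for triple in store:
        bindings = {}
        if all(p == t or (p.startswith("?") and bindings.setdefault(p, t) == t)
               for p, t in zip(pattern, triple)):
            yield bindings

# One forward-chaining rule (a conjunctive pattern in the rule body):
#   ?p worksFor ?c  AND  ?c locatedIn ?city  =>  ?p basedIn ?city
inferred = set()
for b1 in match(("?p", "worksFor", "?c"), triples):
    for b2 in match((b1["?c"], "locatedIn", "?city"), triples):
        inferred.add((b1["?p"], "basedIn", b2["?city"]))

print(inferred)  # {('alice', 'basedIn', 'detroit')}
```

The new `basedIn` triple is stated nowhere in the data; it is derived by joining two patterns, which is exactly the kind of conjunctive-pattern evaluation and rule application that production query engines and reasoners perform at scale.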
Ontologies and reasoning mechanisms, as well as IART convergence, can have a direct impact on the design and deployment of knowledge management and business intelligence systems. The convergence of these intelligent technologies facilitates easy access to and processing of large amounts of data and information to generate actionable “intelligence” that humans and applications can use to gain competitive advantage. It also improves the efficiency of knowledge creation and dissemination across different groups and domains. Issues related to the interoperability of knowledge sources are minimized with the use of ontology-based mediation. The objective of this book is to encourage and renew research on IART convergence with respect to KM and BI technologies and systems from a fresh perspective of semantics and services. For example, using the service-oriented paradigm, one can create data and knowledge services that promote the adaptability, reusability, and interoperability of data and knowledge from a variety of sources. The purpose of this book is to provide a forum for academics and practitioners to identify and explore the issues, opportunities, and solutions that improve knowledge management and provide business intelligence, particularly from the viewpoint of semantics, services, and IART convergence. In other words, in the era of Service Oriented Architecture (SOA), Web Services, and the Semantic Web, where adaptiveness, reuse, and interoperability are paramount, how can IART-based KM and BI be taken to the next level using these technologies to achieve large-scale adoption and usage?
This book is organized into three sections. The first section discusses semantic technologies and their applications as well as automated reasoning for Semantic Web services composition and knowledge management. The second section discusses issues related to intelligent agent and multi-agent systems and their use. The third section delves into intelligent technologies and their applications related to business intelligence, intelligent search engines, trust in virtual organizations, etc.
Section I – Semantics and Reasoning
In the first section, there are six chapters related to semantics and reasoning. The first chapter is titled “Improving Domain Searches Through Customized Search Engines,” contributed by Cecil Chua, Roger Chiang, and Veda Storey. They emphasize that search engines are essential, ubiquitous tools for seeking information on the Internet. Prior research has also demonstrated that combining features of separate search engines often improves retrieval performance. However, such feature combination is often difficult, because developers do not build their software with other developers’ components in mind. To facilitate the development of search engines, the authors propose a customized search engine approach that integrates appropriate components from multiple search engines. This chapter presents an interoperability architecture for building customized search engines. To achieve this, the authors analyze existing search engines and decompose them into self-contained components that are classified into six categories. They have developed a prototype called Automated Software Development Environment for Information Retrieval (ASDEIR), which incorporates intelligent features that detect and attempt to resolve conflicts between components.
The second chapter is titled “A Framework to Analyze User Interactions in an E-Commerce Environment,” written by Manoj Thomas, Richard Redmond, and Victoria Yoon. As e-commerce applications proliferate on the Web, users are often overwhelmed by the task of sifting through copious volumes of information. Since the nature of foraging for information in such digital spaces can be characterized as the interaction between an internal task representation and the external problem domain, the authors look at how expert systems can be used to reduce the complexity of the task. They describe a conceptual framework for analyzing user interactions based on mental representations. They also detail an expert system implementation that uses the ontology language OWL to express the semantics of the representations and the rule language SWRL to define the rule base for contextual reasoning. This chapter illustrates how an expert system can guide users in an e-commerce setting by orchestrating a cognitive fit between the task environment and the task solution.
The third chapter is titled “Semantic Web Services Composition with Case Based Reasoning,” by Taha Osman, Dhavalkumar Thakker, and David Al-Dabass. With the rapid proliferation of Web services as the medium of choice for securely publishing application services beyond the firewall, accurate yet flexible matchmaking of similar services becomes increasingly important, both for human users and for dynamic composition engines. In this chapter, the authors present a novel approach that utilizes the case based reasoning methodology for modeling dynamic Web service discovery and matchmaking, and investigate the use of case adaptation for service composition. Their framework considers Web services’ execution experiences in the decision making process and is highly adaptable to the service requester’s constraints. The framework also makes extensive use of OWL semantic descriptions for implementing both the components of the CBR engine and the matchmaking profiles of the Web services.
The fourth chapter is titled “Semiotic Evaluation of Product Ontologies,” authored by Joerg Leukel and Vijayan Sugumaran. In recent years, product ontologies have been proposed for solving integration problems in product-related Information Systems such as e-commerce and supply chain management applications. A product ontology provides consensual definitions of the concepts and inter-relationships relevant to a product domain of interest. Adopting such an ontology requires means for assessing its suitability and selecting the “right” product ontology. In this chapter, the authors (1) propose a metrics suite for product ontology evaluation based on semiotic theory, and (2) demonstrate the feasibility and usefulness of the metrics suite using a supply chain model. The contribution of this chapter is a comprehensive metrics suite that takes into account the various quality dimensions of product ontologies.
The fifth chapter is titled “Discovery Process in a B2B eMarketplace: A Semantic Matchmaking Approach,” by Fergle D’Aubeterre, Lakshmi Iyer, Richard Ehrhardt, and Rahul Singh. In the context of a customer-oriented value chain, companies must effectively address customers’ changing information needs during the process of acquiring a product or service in order to remain competitive. The ultimate goal of semantic matchmaking is to identify the best resources (supply) that fully meet the requirements (demand); however, this goal is very difficult to achieve when information is distributed over disparate systems. To alleviate this problem in the context of eMarketplaces, the authors suggest an agent-enabled, infomediary-based eMarketplace that enables semantic matchmaking. Specifically, the authors show how multi-criteria decision making techniques can be utilized to rank matches. They describe mechanisms for knowledge representation and exchange that allow partner organizations to seamlessly share information and knowledge to facilitate the discovery process in an eMarketplace context.
The sixth chapter is titled “Organizational Semiotics Complements Knowledge Management: Two Steps to Knowledge Management Improvement,” by Jeffrey Schiffel. The semantic normal forms (SNFs) of organizational semiotics extract structures from natural language texts that may be stored electronically. In themselves, SNFs are only canonic descriptions of the patterns of behavior observed in a culture. Conceptual graphs, and their dynamic variety, dataflow graphs, provide a means to reason over propositions in first-order logic. Conceptual graphs alone, however, do not capture the ontological entities needed for such reasoning. The culture of an organization contains natural language entities that can be extracted for use in knowledge representation and reasoning. Together, in a rigorous two-step process, ontology charting from organizational semiotics and dataflow graphs from knowledge engineering provide a means to extract entities of interest from a subject domain, such as the culture of an organization, and then to represent these entities in formal logic for reasoning. This chapter presents this process and concludes with an example of how process improvement in an IT organization may be measured using it.
Section II – Agent-Based Systems
The second section contains five chapters dealing with intelligent agent and multi-agent systems and their applications. The seventh chapter is titled “Negotiation Behaviors in Agent-Based Negotiation Support Systems,” by Manish Agrawal and Kaushal Chari. Prior research on negotiation support systems (NSS) has paid limited attention to the information content in the observed bid sequences of negotiators, as well as to the cognitive limitations of individual negotiators and their impact on negotiation performance. In this chapter, the authors assess the performance of human subjects in the context of agent-based NSS, and the accuracy of an exponential functional form in representing observed human bid sequences. They then predict the reservation values of negotiators based on their observed bids. Finally, they discuss the impact of negotiation support systems in helping users realize superior negotiation outcomes. Their results indicate that an exponential function is a good model for observed bids.
The eighth chapter is titled “Agents, Availability Awareness, and Decision Making,” by Stephen Russell and Victoria Yoon. Despite the importance of resource availability, the inclusion of availability awareness in current agent-based systems is limited, particularly in decision support settings. This chapter discusses issues related to availability awareness in agent-based systems and proposes that knowledge of resources’ online status and readiness in these systems can improve decision outcomes. A conceptual model for incorporating availability and presence awareness in an agent-based system is presented, and an implementation framework operationalizing the conceptual model using JADE is proposed. Finally, the framework is developed as an agent-based decision support system (DSS) and evaluated in a decision making simulation.
The ninth chapter is titled “Evaluation of Fault Tolerant Mobile Agents in Distributed Systems,” by Hojatollah Hamidi and Abbas Vafaei. The reliable execution of a mobile agent is a very important design issue in building a mobile agent system, and many fault-tolerant schemes have been proposed. This chapter evaluates the performance of fault-tolerant schemes for the mobile agent environment, focusing on check-pointing schemes and cooperating agents. The authors propose a Fault-Tolerant approach for Mobile Agents (FANTOMAS) design, which offers user-transparent fault tolerance that can be activated on demand, according to the needs of the task. The chapter also discusses how a transactional agent with different types of commitment constraints can commit, and proposes a solution for effective agent deployment using dynamic agent domains.
The tenth chapter is titled “Cognitive Parameter Based Agent Selection and Negotiation Process for B2C E-Commerce,” by Bireshwar Mazumdar and R. B. Mishra. Multi-agent paradigms have been developed for negotiation and brokering in B2C e-commerce. Few of the models consider the mental states and social settings (trust and reputation), but no model depicts their combination. This chapter presents three mathematical models. First, the cognitive computational model, a combined model of belief, desire, and intention (BDI) for agents’ mental attitudes and social settings, is discussed. This is used for the computation of trust and then the index of negotiation, which is based on trust and reputation. The second computation model is for the computation of business index that characterizes the parameters of some of the business processes, which match the buyer’s satisfaction level. The third computation model of utility is used for negotiation between the seller and buyer to achieve maximum combined utility increment (CUI), which is the difference between the marginal utility gain (MUG) of a buyer and the marginal utility cost (MUC) of a seller.
The eleventh chapter is titled “User Perceptions and Employment of Interface Agents for Email Notification: An Inductive Approach,” by Alexander Serenko. This chapter investigates user perceptions and employment of interface agents for email notification to answer three research questions pertaining to user demographics, typical usage, and perceptions of this technology. A survey instrument was administered to 75 email interface agent users. Current email interface agent users are predominantly male, well-educated, and well-off innovative individuals who are employed in the IS/IT sector, utilize email heavily, and reside in an English-speaking country. They use agents to announce incoming messages and calendar reminders. The key factors why they like to use agents are perceived usefulness, enjoyment, ease of use, attractiveness, social image, an agent’s reliability, and personalization. The major factors why they dislike doing so are perceived intrusiveness of an agent, agent-system interference, and incompatibility. Users envision “ideal email notification agents” as highly intelligent applications delivering messages in a non-intrusive yet persistent manner. A model of agent acceptance and use is discussed in this chapter.
Section III – Intelligent Technologies
The third section of the book deals with intelligent technologies and contains five chapters. The twelfth chapter is titled “Traffic Responsive Signal Timing Plan Generation Based on Neural Network,” by Azzam ul-Asar, M. Ullah, Mudasser Wyne, Jamal Ahmed, and Riaz ul-Hasnain. This chapter proposes a neural network based traffic signal controller, which eliminates most of the problems associated with the Traffic Responsive Plan Selection (TRPS) mode of the closed loop system. Instead of storing timing plans for different traffic scenarios, which requires clustering and threshold calculations, the proposed approach uses an Artificial Neural Network (ANN) model that produces optimal plans based on optimized weights obtained through its learning phase. Clustering in a closed loop system is the root of these problems and has therefore been eliminated in the proposed approach. The Particle Swarm Optimization (PSO) technique is used both in the learning rule of the ANN and in generating training cases for the ANN in terms of optimized timing plans, based on Highway Capacity Manual (HCM) delay for all traffic demands found in historical data. The ANN generates optimal plans online to address real time traffic demands and thus is more responsive to varying traffic conditions.
The thirteenth chapter is titled “Intelligent Information Integration: Reclaiming the Intelligence,” by Naveen Ashish and David Maluf. The authors present their work in the conceptualization, design, implementation, and application of “lean” information integration systems. They present a new data integration approach based on a schema-less data management and integration paradigm, which enables the development of cost-effective, large-scale integration applications. They have designed and developed a highly scalable, information-on-demand system called NETMARK, which facilitates information access and integration based on a theory of articulation management and a context sensitive paradigm. NETMARK has been widely deployed for managing, storing, and searching unstructured or semi-structured arbitrary XML and HTML information at the National Aeronautics and Space Administration (NASA). In this chapter, the authors describe the theory, design, and implementation of their system, present experimental benchmark evaluations, and validate their approach through real-world applications in the NASA enterprise.
The fourteenth chapter is titled “Association Analysis of Alumni Giving: A Formal Concept Analysis,” by Ray Hashemi, Louis Le Blanc, Azita Bahrami, Mahmood Bahar, and Bryan Traywick. The authors analyze a large sample of alumni giving records for a public university in the southwestern United States using Formal Concept Analysis (FCA). This represents an initial attempt to analyze such data by means of a machine learning technique. The variables employed include the gift amount to the university foundation as well as traditional demographic variables such as year of graduation, gender, ethnicity, and marital status. The foundation serves as one of the institution’s non-profit, fund-raising organizations. It pursues substantial gifts that are designated for the educational or leadership programs of the giver’s choice. Although it processes gifts of all sizes, the foundation’s focus is on major gifts and endowments. Association analysis of the dataset is a two-step process: in the first step, FCA is applied to identify concepts and their relationships, and in the second step, association rules are defined for each concept. The hypothesis examined in this chapter is that alumni generosity toward their alma mater can be predicted using association rules obtained through Formal Concept Analysis.
The fifteenth chapter is titled “KStore: A Dynamic Meta-Knowledge Repository for Intelligent BI,” by Jane Campbell Mazzagatti. KStore is a computer data structure based on the Phaneron of C. S. Peirce (Peirce, 1931-1958). This structure, called a Knowledge Store, KStore, or simply K, is currently being developed as a storage engine to support BI data queries and analysis. The first Ks being constructed handle nominal data and record sequences of field/record data variables and their relationships. These rudimentary Ks are dynamic, allowing real-time data processing, ad hoc queries, and data compression to facilitate data mining. This chapter describes a next step in the development of the K structure: recording into K the metadata associated with the field/record data, in particular the column or dimension names and a source indicator.
The sixteenth chapter is titled “A Transaction-Oriented Architecture for Structuring Unstructured Information in Enterprise Applications,” by Simon Polovina and Richard Hill. It is known that 80-85% of all corporate information remains unstructured. As such, many enterprises rely on Information Systems that cause them to risk transactions based on a lack of information (errors of omission) or on misleading information (errors of commission). To address this concern, the fundamental business concept of monetary transactions is extended to include qualitative business concepts. A Transaction Model (TM) is accordingly identified that provides a structure for these unstructured but vital aspects of business transactions. By highlighting how unstructured information can be integrated into transactions, the TM enables businesses to take a much more balanced view of the transactions they engage in and to discover novel transactions that they might otherwise have missed. A simple example is provided that illustrates this integration and reveals a key missing element. This discovery points to a transaction pattern that can be used to ensure that all the parties (or agents) in a transaction are identified, as well as to capture unstructured and structured information in a coherent framework. In support of the TM as a pattern, further examples of its use in a variety of domains are given. A number of enterprise applications are suggested, such as multi-agent systems, document text capture, and knowledge management.
The seventeenth chapter is titled “Virtual Organizational Trust Requirements: Can Semiotics Help Fill the Trust Gap?” by Tim French. It is suggested that the semiotic ladder, together with a supportive trust agent, can be used to better explicate “soft” trust issues in the context of Grid services. The contribution offered here is intended to fill a gap in the current understanding and modeling of such issues and to support Grid service designers in better conceptualizing, and hence managing, trust issues. The semiotic paradigm is intended to offer an integrative viewpoint within which to explicate “soft” trust issues throughout the Grid life-cycle. A computationally lightweight trust agent is described that can be used to verify high level trust of a Virtual Organization. The potential benefits of the advocated approach include the reduction of risk and improvements in the quality and reliability of Grid service partnerships. For these benefits to accrue, explicit “soft” as well as “hard” trust management is essential, as is an integrative viewpoint.
Considerable advancements are being made in IART convergence, and novel approaches and applications are emerging as a result of this convergence in different domains. Efficient use of intelligent systems is becoming a necessary goal for all organizations, and this book presents an outstanding collection of the latest research on advancements in intelligent, adaptive, and reasoning technologies. The use of intelligent applications in the context of IART convergence will greatly improve efficiency, effectiveness, and productivity in a variety of domains, including healthcare, agriculture, fisheries, manufacturing, and telecommunications.
Intelligent, Adaptive and Reasoning Technologies: New Developments and Applications
Dr. Sugumaran’s research has been partly supported by Sogang Business School’s World Class University Program (R31-20002) funded by Korea Research Foundation.