Methodological Advancements in Intelligent Information Technologies: Evolutionary Trends

Vijayan Sugumaran (Oakland University, USA)
Release Date: October 2009 | Copyright: © 2010 | Pages: 396
ISBN13: 9781605669700 | ISBN10: 1605669709 | EISBN13: 9781605669717 | DOI: 10.4018/978-1-60566-970-0

Description

Recent advancements in technology have driven major change within many industries. Because of such advancements, businesses and organizations are adapting to more technology-based work environments in order to achieve higher rates of production, output, and accuracy.

Methodological Advancements in Intelligent Information Technologies: Evolutionary Trends brings together research from international authors detailing developments in intelligent information technologies and their impact on organizational environments. This esteemed reference publication covers topics on agent-based approaches to process management, semantic Web services, data mining techniques, engineering software technologies, and scalable and adaptive Web search engines, in order to provide current research for practitioners, educators, and students interested in the implementation of innovative technologies in a variety of work environments.

Topics Covered

The many academic areas covered in this publication include, but are not limited to:

  • Adaptive Web applications
  • Agent-based approaches to process management
  • Agile workflow technology
  • Bayesian networks
  • Building search engines
  • E-mail mining
  • Engineering software systems
  • Heterogeneous and semantically-enriched ontologies
  • Quality of service models
  • Semantic Web Services

Reviews and Testimonials

Efficient use of intelligent systems is becoming a necessary goal for all, and an outstanding collection of the latest research associated with advancements in intelligent agent applications, semantic technologies, and decision support and modelling is presented in this book. Use of intelligent applications will greatly improve productivity in the social computing arena.

– Vijayan Sugumaran, Oakland University, USA


Preface

In the highly interconnected world we live in today, a variety of intelligent information technologies are enabling the exchange of information and knowledge to solve problems. Communication technologies and social computing are emerging as the backbone for gathering information and executing various tasks from any place at any time. While this trend puts significant pressure on the bandwidth that carries much of this communication, it is also pushing the frontiers of innovation in the capabilities of handheld devices. Continued innovation is needed to ensure that mobile intelligent information technologies can keep meeting our needs in an environment where our ability to make decisions depends on our ability to access a reliable source of real-time information, something that can be facilitated by tapping into the collective wisdom of a network of people in a community and the knowledge they bring together on demand.

Community networks exist in several disciplines; however, they are generally limited in scope. The content and knowledge created in a community network typically stays within its boundary. One of the main reasons why there is not much connectivity or interaction between various community networks is that each community has its own data and knowledge representation and the content repositories are heterogeneous. Moreover, there are no common standards that facilitate data interoperability between the applications in disparate community networks. Several research efforts attempt to fill this gap through the use of semantic technologies. They focus on developing interoperability mechanisms to support knowledge and data exchange through semantic mediation. For each community, a local ontology and metadata can be created that capture the syntactic and semantic aspects of the content that is part of the network. The community network interface will enable users to participate in this metadata and ontology creation process. The ontologies and the metadata from each of the networks can then be integrated to create a global ontology and meta schema. This can be used to provide interoperability between multiple community networks and facilitate broader collaboration.
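To make this concrete, here is a minimal sketch, in Python with rdflib, of how per-community ontologies might be pulled into a single global graph that cross-community applications can query. The file names and the query are purely illustrative, not a reference to any specific community network.

```python
# A minimal sketch of merging per-community ontologies into a global
# graph; rdflib is assumed, and the Turtle files are hypothetical.
from rdflib import Graph

communities = ["health_community.ttl", "gis_community.ttl"]  # hypothetical

global_graph = Graph()
for source in communities:
    local = Graph()
    local.parse(source, format="turtle")  # load one community's ontology
    global_graph += local                 # merge its triples into the global graph

# A cross-community query against the merged (global) ontology.
results = global_graph.query("""
    SELECT ?s ?label WHERE { ?s rdfs:label ?label }
""")
for row in results:
    print(row.s, row.label)
```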

A community network supports both individual interactions and multiple collaborations within a group. Users can create and contribute content, discover services, use resources, and leverage the power of the community. The community network can provide a range of opportunities for users to access various knowledge repositories, database servers and source documents. The infrastructure provided by the community network enables multiple channels of accessibility and enhanced opportunities for collaboration. One of the goals of this stream of research is to create a community infrastructure with appropriate services, protocols, and collaboration mechanisms that will support information integration and interoperability.

Human Computation refers to the application of human intelligence to solve complex problems that cannot be solved by computers alone. Humans can see patterns and semantics (context, content, and relationships) more quickly, accurately, and meaningfully than machines. Human Computation therefore applies to the problem of annotating, labeling, and classifying voluminous data streams. Of course, the application of autonomous machine intelligence (data mining and machine learning) to the annotation, labeling, and classification of data granules is also valid and efficacious. Machine learning and data mining techniques are needed to cope with the ever-increasing amounts of data being collected by scientific instruments. They are particularly suited to identifying near-real-time events and to tracking the evolution of those events. Thus, a real challenge for scientific communities is the categorization, storage, and reuse of very large data sets to produce knowledge. There is a great need for developing services for the semantic annotation of data using human and computer-based techniques.
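As a rough illustration of the machine side of this division of labor, the following hedged sketch (using scikit-learn, with invented toy data and an arbitrary confidence threshold) auto-annotates incoming items when the classifier is confident and defers to human annotators when it is not.

```python
# A hedged sketch of machine-assisted annotation: a simple text classifier
# labels incoming data descriptions, and low-confidence items are routed
# to humans (the "human computation" side). Data and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = ["solar flare detected in active region", "supernova candidate in survey image"]
train_labels = ["solar", "stellar"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

for doc in ["new flare event in region 1234"]:
    confidence = model.predict_proba([doc])[0].max()
    if confidence >= 0.8:                 # confident: annotate automatically
        print(doc, "->", model.predict([doc])[0])
    else:                                 # uncertain: defer to a human annotator
        print(doc, "-> queued for human annotation")
```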

The best annotation service in the world is useless if the markups (tags) are not scientifically meaningful (i.e., if the tags do not enable data reuse and understanding). Therefore, it is incumbent upon science disciplines and research communities to develop common data models, common terminology, taxonomies, and ontologies. These semantic annotations are often expressed in XML form, either as RDF (Resource Description Framework) triples or in OWL (Web Ontology Language).

Consequently, in order for the data to be reusable, several traditional conditions must be met, except that these must now be satisfied through non-traditional approaches. For example, data reusability typically depends on: (1) data discovery (all relevant data must be found in order for a research project to be meaningful); (2) data understanding (data must be understood in order to be useful); (3) data interoperability (data must work with legacy data and with current data from multiple sources in order to maximize their value); and (4) data integration (data must work with current analysis tools in order to yield results). Non-traditional approaches are needed to meet these conditions because the enormous growth in scientific data volumes renders it impractical for humans alone to classify and index the incoming data flood. These new approaches include intelligent techniques such as machine learning, data mining, annotation, informatics, and semantic technologies. To address these needs, one needs to design and implement a semantic annotation service, based on current and emerging standards, that incorporates tags in loosely-structured folksonomies and ontologies. This could be offered as a service similar to other data services provided by intelligent agent and multiagent systems.
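The following is one possible shape such a service could take: a hypothetical Python sketch that maps free-form folksonomy tags onto controlled ontology terms where a mapping exists and preserves the raw tag otherwise. All URIs and the mapping table are invented for illustration.

```python
# A minimal sketch of a semantic annotation service that maps free-form
# folksonomy tags onto controlled ontology terms; the mapping table and
# all URIs are hypothetical.
ONTOLOGY_TERMS = {
    "flare": "http://example.org/onto#SolarFlare",
    "sunspot": "http://example.org/onto#Sunspot",
}

def annotate(resource_uri: str, tags: list[str]) -> list[tuple[str, str, str]]:
    """Return RDF-style (subject, predicate, object) triples: known tags are
    lifted to ontology terms, unknown tags kept as plain folksonomy labels."""
    triples = []
    for tag in tags:
        term = ONTOLOGY_TERMS.get(tag.lower())
        if term:
            triples.append((resource_uri, "http://example.org/onto#about", term))
        else:
            triples.append((resource_uri, "http://example.org/onto#tag", tag))
    return triples

print(annotate("http://example.org/data/obs42", ["Flare", "aurora"]))
```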

Book Organization

This book is organized into three sections. The first section discusses issues related to intelligent agent and multiagent systems, the second section introduces semantic technologies and their applications, and the third section delves into decision support and modeling.

Section I-Intelligent Agent and Multiagent Systems

The first section contains six chapters related to intelligent agents. The first chapter is titled "Engineering Software Systems with Social-Driven Templates," by Manuel Kolp, Yves Wautelet, and Sodany Kiv. They contend that Multi-Agent Systems (MAS) architectures are gaining popularity over traditional ones for building the open, distributed, and evolving software required by today's corporate IT applications, such as e-business systems, web services, and enterprise knowledge bases. Since the fundamental concepts of multi-agent systems are social and intentional rather than object-oriented, functional, or implementation-oriented, the design of MAS architectures can be eased by using social patterns: detailed agent-oriented design idioms that describe MAS architectures composed of autonomous agents which interact and coordinate to achieve their intentions, like actors in human organizations. This chapter presents social patterns and focuses on a framework aimed at gaining insight into these patterns, which can be integrated into the agent-oriented software engineering methodologies used to build MAS. An overview of the mapping from system architectural design (through organizational architectural styles) to system detailed design (through social patterns) is presented with a data integration case study.

The second chapter is titled "A Multiagent-Based Framework for Integrating Biological Data," authored by Faheema Maghrabi, Hossam M. Faheem, Taysir Soliman, and Zaki Taha Fayed. Biological data has been rapidly increasing in volume across different web data sources, and querying multiple data sources manually on the Internet is time consuming for biologists. Therefore, systems and tools that facilitate searching multiple biological data sources are needed. Traditional approaches to building distributed or federated systems do not scale well to the large, diverse, and growing number of biological data sources. Internet search engines allow users to search through large numbers of data sources, but provide very limited capabilities for locating, combining, processing, and organizing information. A promising approach to this problem is to provide access to the large number of biological data sources through a multiagent-based framework in which a set of agents cooperate with each other to retrieve relevant information from different biological web databases. The proposed system uses a mediator-based integration approach with a domain ontology, which is used as a global schema. This chapter proposes a multiagent-based framework that responds to biological queries according to its biological domain ontology.

The third chapter is titled "A Modern Epistemological Reading of Agent Orientation," by Yves Wautelet, Christophe Schinckus, and Manuel Kolp. This chapter presents a modern epistemological validation of the process of agent-oriented software development. Agent orientation has been widely presented in recent years as a novel modeling, design, and programming paradigm for building systems using features such as openness, dynamics, sociality, and intentionality. These features are put into perspective through a Lakatosian epistemological approach. The contribution of this chapter is to acquaint researchers with the epistemological basis of the agent research domain and the context of the emergence of object and agent orientation. The chapter advocates the use of the Lakatosian research programme concept as an epistemological basis for object and agent orientation, on the basis of how these frameworks operationalize relevant theoretical concepts of the Kuhnian and Lakatosian theories.

The fourth chapter is titled "A Generic Internal State Paradigm for the Language Faculty of Agents for Task Delegation," by T. Chithralekha and S. Kuppuswami. Language ability is an essential aspect of delegation in agents because it provides for the collaborative natural language interaction that delegation requires. For an agent to provide its services to users of multiple languages, this collaborative natural language interaction must be multilingual. Providing these two language abilities in a manner characteristic of agents leads to two types of autonomy: behavior autonomy, to provide collaborative natural language interaction in every language, and language ability management autonomy, for managing the multiple language competencies. Thus, the language ability of an agent evolves into a language faculty by possessing behavior and behavior management autonomies. The existing paradigms for the internal state of agents are only behavior-oriented and do not suffice to represent the internal state of the language faculty. Hence, this chapter proposes a new paradigm for the internal state of the language faculty of agents consisting of the belief, task, and behavior (BTB) abstractions. Its semantics and dynamism are explained, and the application of the paradigm is illustrated with examples.

The fifth chapter is titled "An Agent-based Approach to Process Management in E-Learning Environments," by Hokyin Lai, Minhong Wang, Jingwen He, and Huaiqing Wang. Learning is a process of acquiring new knowledge. Ideally, this process is the result of an active interaction of key cognitive processes, such as perception, imagery, organization, and elaboration. Quality learning emphasizes designing a course curriculum or learning process that can elicit the cognitive processing of learners. However, most e-Learning systems nowadays are resource-oriented instead of process-oriented. These systems were designed without adequate support from pedagogical principles to guide the learning process, and they do not capture the sequence in which knowledge is acquired, which is in fact extremely important to the quality of learning. This study aims to develop an e-Learning environment that engages students in their learning process by guiding and customizing that process in an adaptive way. The expected performance of the Agent-based e-Learning Process model is also evaluated by comparison with traditional e-Learning models.

The sixth chapter is titled "Inference Degradation in Information Fusion - A Bayesian Network Case," by Xiangyang Li. Bayesian networks have been extensively used in active information fusion, which selects the best sensor based on expected utility calculations. However, inference degradation happens when the same sensors are selected repeatedly over time because the applied strategy is not designed to consider the history of sensor engagement. This phenomenon decreases fusion accuracy and efficiency, in direct conflict with the objective of integrating information from multiple sensors. This chapter provides mathematical scrutiny of the inference degradation problem in popular myopic planning. It examines generic dynamic Bayesian network models and shows experimental results for mental state recognition tasks. It also discusses candidate solutions with initial results. The inference degradation problem is not limited to the fusion tasks discussed here and may emerge in variants of sensor planning strategies with more global optimization approaches. This study provides common guidelines for information integration applications in information awareness and intelligent decision making.
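To illustrate the phenomenon generically (this is not the chapter's model), the sketch below runs a myopic selection loop in plain Python: each step engages the sensor with the highest expected entropy reduction, and a simple history-based discount, one candidate style of remedy, keeps a single dominant sensor from being engaged forever. All numbers are invented.

```python
# An illustrative myopic sensor-selection loop with a history-based
# discount to counter inference degradation; posteriors are invented.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

# Expected posterior distribution if each sensor is engaged (illustrative).
sensor_posteriors = {
    "camera": [0.7, 0.3],
    "microphone": [0.65, 0.35],
}
usage = {name: 0 for name in sensor_posteriors}
prior = [0.5, 0.5]

for step in range(4):
    def utility(name):
        gain = entropy(prior) - entropy(sensor_posteriors[name])
        return gain / (1 + usage[name])   # discount repeatedly engaged sensors
    best = max(sensor_posteriors, key=utility)
    usage[best] += 1
    print(f"step {step}: engage {best}")
```

Without the `1 + usage[name]` discount, the camera would be selected at every step, which is exactly the repeated-selection pattern the chapter analyzes.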

Section II-Semantic Technologies and Applications

The second section contains five chapters dealing with semantic technologies and applications. The seventh chapter, titled "Agent-based Semantic Interoperability of Geo-Services," is authored by Iftikhar U. Sikder and Santosh K. Misra. This chapter proposes a multi-agent-based framework that allows multiple data sources and models to be semantically integrated for spatial modeling in business processing. The chapter reviews the feasibility of ontology-based spatial resource integration options to combine core spatial reasoning with domain-specific application models. The authors propose an ontology-based framework for semantic-level communication of spatial objects and application models, and introduce a multi-agent system (OSIRIS - Ontology-based Spatial Information and Resource Integration Services) that semantically interoperates complex spatial services and integrates them in a meaningful composition. The advantage of using multi-agent collaboration in OSIRIS is that it obviates the need for end-user analysts to decompose a problem domain into subproblems or to map different models according to what they actually mean. A multi-agent interaction scenario for collaborative modeling of spatial applications using the proposed custom features of OSIRIS is illustrated. The framework is then applied to an e-Government use case scenario through a prototype system, which illustrates the application of a domain ontology of urban environmental hydrology and the evaluation of the consequences of decision makers' land use changes. In the e-Government context, the proposed OSIRIS framework works as a semantic layer for a one-stop geospatial portal.

The eighth chapter is titled "EnOntoModel: A Semantically-enriched Model for Ontologies," written by Nwe Ni Tun and Satoshi Tojo. Ontologies are intended to facilitate semantic interoperability among distributed and intelligent information systems where diverse software components, computing devices, knowledge, and data are involved. Since a single global ontology is no longer sufficient to support the variety of tasks performed on differently conceptualized knowledge, ontologies have proliferated in multiple forms of heterogeneity, even for the same domain; such ontologies are called heterogeneous ontologies. For interoperation among information systems through heterogeneous ontologies, an important step in handling semantic heterogeneity is enriching (and clarifying) the semantics of concepts in ontologies. In this chapter, the authors propose a conceptual model (called EnOntoModel) of semantically-enriched ontologies by applying three philosophical notions: identity, rigidity, and dependency. As for the advantages of EnOntoModel, conceptual analysis of enriched ontologies and efficient matching between them are presented.

The ninth chapter is titled "A New Approach for Building a Scalable and Adaptive Vertical Search Engine," authored by H. Arafat Ali, Ali I. El Desouky, and Ahmed I. Saleh. Search engines are the most important tools for finding useful and recent information on the web, and they rely on crawlers that continually crawl the web for new pages. The authors suggest a better solution to the limitations of general-purpose search engines in the form of a new generation of search engines called vertical search engines. Searching the web vertically means dividing the web into smaller regions, each related to a specific domain, with one crawler allowed to search each domain. The innovation of this work is adding intelligence and adaptation ability to focused crawlers. These added features guide the crawler to retrieve more relevant pages while crawling the web. The proposed crawler has the ability to estimate the rank of a page before visiting it and adapts itself to any changes in its domain. It also uses novel techniques for rating the extracted links so that it can decide, with high accuracy, which page to visit next. The proposed approach integrates evidence from both content and linkage, and it is unique in two aspects. First, it simplifies focused crawling and improves crawling performance by maximizing both crawler intelligence and adaptivity. Second, it tries to overcome the drawbacks of traditional crawling approaches by combining a number of different disciplines, such as information retrieval, machine learning, link context, and the linking structure of the web. Hence, it can achieve tunneling, better accuracy, simple implementation, and self-dependency. Experimental results have shown that the proposed strategy demonstrates significant performance improvement over traditional crawling techniques.
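As a generic illustration of link rating in a focused crawler (not the authors' actual algorithm), the sketch below scores each extracted link by combining the relevance of the source page's content with the relevance of the link's anchor context, and keeps the frontier in a priority queue; the scoring function and weights are assumptions.

```python
# A minimal focused-crawler sketch: rate links by blending page-content
# relevance with link-context relevance; weights and data are illustrative.
import heapq

def relevance(text: str, topic_terms: set[str]) -> float:
    words = text.lower().split()
    return sum(w in topic_terms for w in words) / max(len(words), 1)

TOPIC = {"genome", "protein", "sequence"}
frontier = []  # max-heap simulated with negated scores

def rate_link(url, anchor_text, source_page_text, w_content=0.6, w_context=0.4):
    score = (w_content * relevance(source_page_text, TOPIC)
             + w_context * relevance(anchor_text, TOPIC))
    heapq.heappush(frontier, (-score, url))

rate_link("http://example.org/a", "protein sequence database",
          "a page about genome and protein analysis tools")
rate_link("http://example.org/b", "photo gallery", "holiday pictures")

next_score, next_url = heapq.heappop(frontier)
print("crawl next:", next_url, "score:", -next_score)
```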

The tenth chapter, titled "Information Customization using SOMSE: A Self-Organizing Map Based Approach," is written by Mohamed Salah Hamdi. Conventional Web search engines return long lists of ranked documents that users are forced to sift through to find relevant documents. The notoriously low precision of Web search engines, coupled with the ranked-list presentation, makes it hard for users to find the information they are looking for. One of the fundamental issues of information retrieval is searching for compromises between precision and recall. It is generally desirable to have high precision and high recall, although in reality increasing precision often means decreasing recall, and vice versa. Developing retrieval techniques that yield both high recall and high precision is desirable, but such techniques would impose additional resource demands on search engines, which are under severe resource constraints and may not be able to dedicate enough CPU time to each query. A more productive approach, however, seems to be enhancing post-processing of the retrieved set, such as providing links and semantic maps to the retrieved results of a query. If such value-adding processes allow the user to easily identify relevant documents from a large retrieved set, queries that produce low-precision/high-recall results will become more acceptable. This chapter attempts to improve the quality of Web search by combining meta-search and self-organizing maps, which can help users both locate interesting documents more easily and get an overview of the retrieved document set.
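The following sketch shows the core self-organizing map update that such a post-processing step could rest on, written in Python with NumPy; the grid size, learning schedule, and the random stand-in document vectors are illustrative, and this is not the SOMSE implementation itself.

```python
# A hedged sketch of a self-organizing map laying out document vectors on
# a 2-D grid; document vectors here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 4, 4, 10
weights = rng.random((grid_h, grid_w, dim))   # one prototype per map cell
docs = rng.random((50, dim))                  # stand-in document vectors

for t, x in enumerate(docs):
    lr = 0.5 * (1 - t / len(docs))            # decaying learning rate
    # Best-matching unit: the cell whose prototype is closest to x.
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Pull the BMU and its grid neighbours toward the document vector.
    for i in range(grid_h):
        for j in range(grid_w):
            dist2 = (i - bi) ** 2 + (j - bj) ** 2
            h = np.exp(-dist2 / 2.0)          # Gaussian neighbourhood weight
            weights[i, j] += lr * h * (x - weights[i, j])

# Documents that land in nearby cells are similar, giving a map-style
# overview of the retrieved set.
cell = np.unravel_index(
    np.argmin(np.linalg.norm(weights - docs[0], axis=2)), (grid_h, grid_w))
print("document 0 maps to cell:", cell)
```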

The eleventh chapter is titled "Mining E-Mail Messages: Uncovering Interaction Patterns and Processes using E-mail Logs," by Wil M.P. van der Aalst and Andriy Nikolov. Increasingly, information systems log historic information in a systematic way. Workflow management systems, as well as ERP, CRM, SCM, and B2B systems, often provide a so-called "event log," i.e., a log recording the execution of activities. Thus far, process mining has mainly focused on structured event logs, resulting in powerful analysis techniques and tools for discovering process, control, data, organizational, and social structures from event logs. Unfortunately, many work processes are not supported by systems providing structured logs; instead, very basic tools such as text editors, spreadsheets, and e-mail are used. This chapter explores the application of process mining to e-mail: unstructured or semi-structured e-mail messages are converted into event logs suitable for the application of process mining tools. The chapter presents the tool EMailAnalyzer, embedded in the ProM process mining framework, which analyzes and transforms e-mail messages into a format that allows for analysis using process mining techniques. The main innovative aspect of this work is that, unlike most other work in this area, the analysis is not restricted to social network analysis. Based on e-mail logs, the proposed approach can also discover interaction patterns and processes.
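In the spirit of that conversion (though not EMailAnalyzer's actual code), a minimal Python sketch might treat the normalized subject line as the case identifier and each message as one event, producing a flat event log that process mining tools can consume; the sample messages are invented.

```python
# A minimal sketch of turning e-mail messages into an event log: the
# normalized subject is the case id, each message becomes one event.
# The message tuples are invented examples.
import csv
import re

messages = [  # (sender, receiver, subject, timestamp)
    ("alice@x.org", "bob@x.org", "Order 17", "2009-03-01T09:00"),
    ("bob@x.org", "alice@x.org", "RE: Order 17", "2009-03-01T11:30"),
]

def case_id(subject: str) -> str:
    # Strip reply/forward prefixes so an entire thread maps to one case.
    return re.sub(r"^(re|fw|fwd):\s*", "", subject, flags=re.I).strip()

with open("event_log.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["case", "activity", "originator", "timestamp"])
    for sender, receiver, subject, ts in messages:
        w.writerow([case_id(subject), f"{sender}->{receiver}", sender, ts])
```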

Section III-Decision Support and Modeling

The third section of the book deals with decision support and modeling and contains five chapters. The twelfth chapter, titled "Extended Enterprise and Semantic Contract Monitoring and Execution DSS Architecture," by A. F. Salam, is motivated by the critical problem of stark incompatibility between contractual clauses (typically buried in legal documents) and the myriad of performance measures used to evaluate and reward (or penalize) supply participants in the extended enterprise. This difference between what is contractually expected and what is actually performed, combined with the lack of transparency about what is measured and how those measures relate to contractual obligations, makes performance evaluation difficult, error prone, and confusing for partner organizations. To address this critical issue, the chapter presents a supplier performance contract monitoring and execution decision support architecture and its prototype implementation using a business case study. This work uses the SWRL extension of OWL-DL to represent contract conditions and rules as part of the ontology, and then uses the Jess rule reasoner to execute the contract rules, integrating with service-oriented computing to provide decision support to managers in the extended enterprise.
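A rule-plus-reasoner pipeline of this kind can be caricatured in a few lines of Python: the sketch below pairs conditions over measured supplier performance with contractual consequences, loosely mirroring how SWRL rules fire in a reasoner. The thresholds and fields are invented, and this is not the chapter's OWL-DL/Jess implementation.

```python
# A hedged, rule-engine-flavoured sketch of contract monitoring: each rule
# is a (condition, consequence) pair evaluated over measured performance.
# Thresholds and measure names are illustrative assumptions.
RULES = [
    (lambda m: m["on_time_rate"] < 0.95,
     "Late-delivery clause breached: apply penalty"),
    (lambda m: m["defect_rate"] > 0.02,
     "Quality clause breached: trigger review"),
]

def monitor(measures: dict) -> list[str]:
    """Return the contractual consequences whose conditions fire."""
    return [action for condition, action in RULES if condition(measures)]

print(monitor({"on_time_rate": 0.91, "defect_rate": 0.01}))
```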

Chapter thirteen, titled "Supporting Structured Group Decision Making through System-Directed User Guidance: An Experimental Study," is contributed by Harold J. Lagroue III. This chapter addresses an area that holds considerable promise for enhancing the effective utilization of advanced information technologies: the feasibility of using system-directed, multi-modal user support in place of human facilitators. An application for automating the information technology facilitation process is used to compare the group decision-making effectiveness of human-facilitated groups with groups using virtual facilitation, in an experiment employing auditors, accountants, and IT security professionals as participants. The results of the experiment are presented, and possible avenues for future research are suggested.

The fourteenth chapter, titled "Agile Workflow Technology for Long-Term Processes - Enhanced by Case-Based Change Reuse," is presented by Mirjam Minor, Alexander Tartakovski, Daniel Schmalen, and Ralph Bergmann. The increasing dynamics of today's work impact business processes. Agile workflow technology is a means for the automation of adaptable processes. However, the modification of workflows is a difficult task that is performed by human experts. This chapter discusses a novel approach to agile workflow technology for dynamic, long-term scenarios and for change reuse. First, it introduces new concepts for a workflow modelling language and enactment service, which enable interlocked modelling and execution of workflows by means of a sophisticated suspension mechanism. Second, it provides new process-oriented methods of case-based reasoning to support the reuse of change experience. The results of an experimental evaluation in a real-world scenario highlight the usefulness and practical impact of this work.

Raoudha Ben Djemaa, Ikram Amous, and Abdelmajid Ben Hamadou have contributed chapter fifteen, titled "Extending a Conceptual Modeling Language for Adaptive Web Applications." The complexity of Adaptive Web Applications (AWA) is increasing almost every day. Besides impacting the implementation phase, this complexity must also be suitably managed while modeling the application. In fact, personalization is a critical aspect in many popular domains such as e-commerce. It is so important that it should be dealt with from a design view rather than only an implementation view (which discusses mechanisms rather than design options). To this end, this chapter proposes an approach for AWA called GIWA, based on WA-UML (Web Adaptive Unified Modeling Language). The acceptance of UML as a standard for the design of object-oriented systems, together with the explosive growth of the World Wide Web, has raised the need for UML extensions to model hypermedia applications running on the Internet. GIWA's objective is to facilitate the automatic execution of the design and the automatic generation of adaptable web interfaces. The GIWA methodology is based on several steps: requirement analysis, conceptual design, adaptation design, and generation. Using GIWA, designers can specify, at the requirement analysis stage, the features of the web application to be generated. These features are represented at the conceptual level using WA-UML, a UML extension for adaptive web applications that increases the expressivity of UML by adding labels and graphic annotations to UML diagrams. This extension defines a set of stereotypes and constraints that facilitate the modeling of AWA.

The sixteenth chapter, titled "A QoS Aware, Cognitive Parameters Based Model for the Selection of Semantic Web Services," is written by Sandeep Kumar and Kuldeep Kumar. One of the most important aspects of the semantic web service composition process is the selection of the most appropriate semantic web service. Quality of Service (QoS) and cognitive parameters can be a good basis for this selection. This chapter presents a hybrid model for the selection of semantic web services based on their QoS and cognitive parameters. The presented model provides a new approach to measuring QoS parameters accurately, together with a completely novel, formalized measurement of different cognitive parameters.
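As a generic illustration of QoS-based ranking (not the chapter's formal model), the sketch below normalizes each QoS parameter so that higher is better and combines them with weights to pick a service; the candidate values and weights are invented.

```python
# A minimal sketch of QoS-based service selection via a weighted score.
# Candidate services, parameter scales, and weights are illustrative.
candidates = {
    # service: (availability, reliability, response_time_ms, cost)
    "svcA": (0.99, 0.97, 120, 0.05),
    "svcB": (0.95, 0.99, 80, 0.08),
}
weights = (0.3, 0.3, 0.2, 0.2)

def score(q):
    avail, rel, rt, cost = q
    # Invert "lower is better" parameters onto a 0..1 scale before weighting.
    return (weights[0] * avail + weights[1] * rel
            + weights[2] * (1 - min(rt, 1000) / 1000)
            + weights[3] * (1 - min(cost, 1.0)))

best = max(candidates, key=lambda s: score(candidates[s]))
print("selected service:", best)
```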

Considerable advancements are being made in intelligent information technologies, and novel methodologies and applications are emerging as these technologies mature. Efficient use of intelligent systems is becoming a necessary goal for all, and an outstanding collection of the latest research associated with advancements in intelligent agent applications, semantic technologies, and decision support and modelling is presented in this book. Use of intelligent applications will greatly improve productivity in the social computing arena.

Vijayan Sugumaran

Editor-in-Chief

Author(s)/Editor(s) Biography

Vijayan Sugumaran is a Professor of Management Information Systems in the Department of Decision and Information Sciences at Oakland University in Rochester, Michigan, as well as a WCU Professor of Service Systems Management and Engineering at Sogang University in South Korea. He received his PhD in Information Technology from George Mason University in Fairfax, Virginia, and his research interests are in the areas of service science, ontologies and the Semantic Web, intelligent agent and multi-agent systems, component-based software development, and knowledge-based systems. His most recent publications have appeared in Information Systems Research, ACM Transactions on Database Systems, IEEE Transactions on Education, IEEE Transactions on Engineering Management, Communications of the ACM, Healthcare Management Science, and Data and Knowledge Engineering. He has published over 150 peer-reviewed articles in journals, conferences, and books. He has edited ten books and two journal special issues, and he is the editor-in-chief of the International Journal of Intelligent Information Technologies and also serves on the editorial boards of seven other journals. He was the program co-chair for the 13th International Conference on Applications of Natural Language to Information Systems (NLDB 2008). In addition, he has served as the chair of the Intelligent Agent and Multi-Agent Systems mini-track for the Americas Conference on Information Systems (AMCIS 1999 - 2012) and the Intelligent Information Systems track for the Information Resources Management Association International Conference (IRMA 2001, 2002, 2005 - 2007). He served as chair of the E-Commerce track for the Decision Sciences Institute's Annual Conference, 2004. He was the Information Technology Coordinator for the Decision Sciences Institute (2007-2009). He also regularly serves as a program committee member for numerous national and international conferences.