Yingxu Wang (University of Calgary, Canada)

Release Date: November 2009 | Copyright: © 2010 | Pages: 606

ISBN13: 9781605669021 | ISBN10: 1605669024 | EISBN13: 9781605669038 | DOI: 10.4018/978-1-60566-902-1

Cognitive informatics is a multidisciplinary field that acts as the bridge between natural science and information science. Specifically, it investigates the potential applications of information processing and natural intelligence to science and engineering disciplines.

This collection, entitled **Discoveries and Breakthroughs in Cognitive Informatics and Natural Intelligence**, presents emerging research in cognitive informatics, with a focus on such topics as reducing cognitive overload, real-time process algebra, neural networks for iris recognition, emotion recognition in speech, and the classification of musical chords.

The many academic areas covered in this publication include, but are not limited to:

- 3D object classification
- Classification of musical chords
- Computational brain analysis
- Emotion recognition in speech
- Granular rough computing
- Interactive data mining
- Iris recognition analysis
- Neural networks for iris recognition
- Real-time process algebra
- Reducing cognitive overload

Chapter 1, “A Computational Cognitive Model of the Brain” by Zhiwei Shi, Hong Hu, and Zhongzhi Shi, proposes a computational cognitive model based on a graphical model developed in the authors’ earlier work. The model possesses many attractive properties, including distinctive knowledge representation, the capability of knowledge accumulation, active (top-down) attention, subjective similarity measurement, and attention-guided disambiguation. It also has “consciousness” and can even “think” and “make inferences.” To some extent, it works just as the human brain does. The experimental evidence demonstrates that it can give a reasonable computational explanation of the human phenomenon of forgetting. Although some details and neurobiological mechanisms remain undetermined and deserve further consideration, this work presents a meaningful attempt to give further insight into the brain’s functions.

Chapter 2, “A Cognitive Approach to the Mechanism of Intelligence,” by Yi X. Zhong, explains a new approach to intelligence research, namely the cognitive approach, which explores in depth the core mechanism of intelligence formation in intelligent systems from a cognitive viewpoint. It is discovered, as a result, that the mechanism of intelligence formation in the general case is implemented by a sequence of transformations converting information to knowledge and further to intelligence (i.e., the intelligent strategy, the embodiment of intelligence in a narrower sense). It is also discovered that the three major existing approaches to AI (the structural simulation approach, the functional simulation approach, and the behavior simulation approach) can all be harmoniously unified within the framework of the cognitive approach. These two discoveries, together with the related background, are reported in this chapter.

Chapter 3, “Reducing Cognitive Overload by Meta-Learning Assisted Algorithm Selection” by Lisa Fan and Minxiao Lei, presents a meta-learning approach that helps users automatically select the most suitable algorithms during the data-mining model-building process. The authors discuss the meta-learning method in detail and present empirical results showing the improvement achieved by a hybrid model that combines the meta-learning method with Rough Set feature reduction. Redundant attributes of the dataset can be identified, so that using the reduct of the dataset’s attributes speeds up the ranking process and increases accuracy. With the reduced search space, users’ cognitive load is reduced.
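The Rough Set reduction step described above can be illustrated with a minimal sketch. The partition-based redundancy test and the attribute names below are illustrative assumptions, not the authors' implementation:

```python
def partition(rows, attrs):
    """Group row indices into indiscernibility classes: rows that agree
    on every attribute in `attrs` fall into the same class."""
    classes = {}
    for i, row in enumerate(rows):
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return set(frozenset(c) for c in classes.values())

def reduct(rows, attrs):
    """Greedily drop each attribute whose removal leaves the
    indiscernibility partition unchanged; the survivors form a reduct."""
    kept = list(attrs)
    full = partition(rows, attrs)
    for a in list(attrs):
        trial = [x for x in kept if x != a]
        if partition(rows, trial) == full:
            kept = trial  # attribute `a` carried no extra information
    return kept

# Attribute "a" merely duplicates "b", so the reduct drops it.
rows = [{"a": 0, "b": 0, "c": 0},
        {"a": 0, "b": 0, "c": 1},
        {"a": 1, "b": 1, "c": 0}]
print(reduct(rows, ["a", "b", "c"]))  # ['b', 'c']
```

Searching over the reduct `["b", "c"]` instead of all three attributes is what shrinks the search space and, in the chapter's setting, the user's cognitive load.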

Chapter 4, “Analyzing Learning Methods in a Functional Environment” by Alberto de la Encina, Mercedes Hidalgo-Herrero, Pablo Rabanal, Ismael Rodríguez, and Fernando Rubio, presents a programming environment that helps study the behavior of cognitive models. The environment makes it easy to define new cognitive processes, simplifies the methods to interconnect them, and provides graphical information for analyzing how a complex cognitive system evolves. Moreover, it includes observation facilities, so that the user can analyze the internal behavior of each of the cognitive entities appearing in the system. The authors illustrate the usefulness of their system with several examples in the chapter.

Chapter 5, “Humans and Machines: Nature of Learning and Learning of Nature” by Hélène Hagège, Christopher Dartnell, Éric Martin, and Jean Sallantin, relates lessons from philosophy, psychology, didactics, and ethics to their work in computational scientific discovery, which aims at empowering learning machines with the task of assisting human researchers (Dartnell, Martin, Hagège, & Sallantin, 2008). The chapter concludes with didactical and ethical considerations.

Chapter 6, “On Cognitive Properties of Human Factors and Error Models in Engineering and Socialization” by Yingxu Wang, explores the cognitive foundations of human traits and the cognitive properties of human factors in engineering. A comprehensive set of fundamental traits of human beings is identified, and the hierarchical model of basic human needs is formally described. The characteristics of human factors and their influences in engineering organizations and socialization are explored. Based on the models of basic human traits, needs, and their influences, the driving forces behind human factors in engineering and society are revealed. A formal model of human errors in task performance is derived, and case studies of the error model in software engineering are presented.

Chapter 7, “User-Centered Interactive Data Mining” by Yan Zhao and Yiyu Yao is guided by the concept that while many data mining models concentrate on automation and efficiency, interactive data mining models focus on adaptive and effective communications between human users and computer systems. User requirements and preferences play an important role in human-machine interactions, and guide the selection of knowledge representations, knowledge discovery operations and measurements, combined with explanations of mined patterns. This chapter discusses these fundamental issues based on a user-centered three-layer framework of interactive data mining.

Chapter 8, “On Concept Algebra: A Denotational Mathematical Structure for Knowledge and Software Modeling” by Yingxu Wang, presents a formal theory for abstract concepts and knowledge manipulation known as “concept algebra.” The mathematical models of concepts and knowledge are developed based on the object-attribute-relation (OAR) theory. The formal methodology for manipulating knowledge as a concept network is described. Case studies demonstrate that concept algebra provides a generic and formal means of knowledge manipulation that is capable of dealing with complex knowledge and software structures as well as their algebraic operations.

Chapter 9, “On System Algebra: A Denotational Mathematical Structure for Abstract System Modeling” by Yingxu Wang, presents a mathematical theory of system algebra and its applications in cognitive informatics, system engineering, and software engineering. A rigorous treatment of abstract systems is described, and the algebraic relations and compositional operations of abstract systems are analyzed. System algebra provides a denotational mathematical means that can be used to model, specify, and manipulate generic “to be” and “to have” type problems, particularly system architectures and high-level system designs, in computing, software engineering, system engineering, and cognitive informatics.

Chapter 10, “RTPA: A Denotational Mathematics for Manipulating Intelligent and Computational Behaviors” by Yingxu Wang, discusses real-time process algebra (RTPA), a denotational mathematical structure for denoting and manipulating system behavioral processes. RTPA is designed as a coherent algebraic system for intelligent and software system modeling, specification, refinement, and implementation. RTPA encompasses 17 metaprocesses and 17 relational process operations. RTPA can be used to describe both logical and physical models of software and intelligent systems. Logic views of system architectures and their physical platforms can be described using the same set of notations. When a system architecture is formally modeled, the static and dynamic behaviors performed on the architectural model can be specified by a three-level refinement scheme at the system, class, and object levels in a top-down approach. RTPA has been successfully applied in real-world system modeling and code generation for software systems, human cognitive processes, and intelligent systems.

Chapter 11, “A Denotational Semantics of Real-Time Process Algebra (RTPA)” by Xinming Tan and Yingxu Wang, seeks a new framework for modeling time and processes in order to represent RTPA in denotational semantics. Within this framework, time is modeled by the elapse of process execution. The process environment encompasses the states of all variables, represented as mathematical maps that project variables to their corresponding values. Duration is introduced as a pair consisting of a time interval and the environment, to represent the changes of the process environment during that time interval. Temporally ordered durations and operations on them are used to denote process executions. On this basis, a comprehensive set of denotational semantics for RTPA is systematically developed and formally expressed.

Chapter 12, “An Operational Semantics of Real-Time Process Algebra (RTPA)” by Yingxu Wang and Cyprian F. Ngolah presents an operational semantics of RTPA, which explains how syntactic constructs in RTPA can be reduced to values on an abstract reduction machine. The operational semantics of RTPA provides a comprehensive paradigm of formal semantics that establishes an entire set of operational semantic rules of software. RTPA has been successfully applied in real-world system modeling and code generation for software systems, human cognitive processes, and intelligent systems.

Chapter 13, “Formal Modeling and Specification of Design Patterns Using RTPA” by Yingxu Wang and Jian Huang reveals that a pattern is a highly complicated and dynamic structure for software design encapsulation, because of its complex and flexible internal associations between multiple abstract classes and instantiations. The generic model of patterns is not only applicable to existing patterns’ description and comprehension, but also useful for future patterns’ identification and formalization.

Chapter 14, “Deductive Semantics of RTPA” by Yingxu Wang presents a complete paradigm of formal semantics that explains how deductive semantics is applied to specify the semantics of real-time process algebra (RTPA) and how RTPA challenges conventional formal semantic theories. Deductive semantics can be applied to define abstract and concrete semantics of programming languages, formal notation systems, and large-scale software systems, to facilitate software comprehension and recognition, to support tool development, to enable semantics-based software testing and verification, and to explore the semantic complexity of software systems. Deductive semantics may greatly simplify the description and analysis of the semantics of complicated software systems specified in formal notations and implemented in programming languages.

Chapter 15, “On the Big-R Notation for Describing Iterative and Recursive Behaviors” by Yingxu Wang, introduces the big-R notation, which provides a unifying mathematical treatment of iterations and recursions in computing. Mathematical models of iterations and recursions are developed using logical inductions. Based on the mathematical model of the big-R notation, the fundamental properties of iterative and recursive behaviors of software are comparatively analyzed. The big-R notation has been adopted and implemented in Real-Time Process Algebra (RTPA) and its supporting tools. Case studies demonstrate that a convenient notation may dramatically reduce the difficulty and complexity of expressing a frequently used and highly recurring concept in computing and software engineering.
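As a hedged illustration of the form such a notation takes (the exact RTPA typography differs; this sketch follows the chapter's description, with $P(i)$ a process and $\to$ sequential composition):

```latex
% Bounded iteration: execute process P(i) for i = 1, 2, ..., n
\mathop{R}_{i=1}^{n} P(i) \;\triangleq\; P(1) \to P(2) \to \cdots \to P(n)

% Conditional (while-style) iteration: repeat P while the guard expBL holds
\mathop{R}^{\,expBL = \mathrm{T}} P
```

The point of the single operator is that bounded loops, guarded loops, and recursions all become instances of one repetition construct, rather than three unrelated notations.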

Chapter 16, “Formal RTPA Models for a Set of Meta-Cognitive Processes of the Brain” by Yingxu Wang, describes the cognitive processes modeled at the metacognitive level of the layered reference model of the brain (LRMB), which encompass those of object identification, abstraction, concept establishment, search, categorization, comparison, memorization, qualification, quantification, and selection. It is recognized that all higher-layer cognitive processes of the brain rely on the metacognitive processes. Each of these fundamental cognitive processes is formally described by a mathematical model and a process model. Real-time process algebra (RTPA) is adopted as a denotational mathematical means for rigorously modeling and describing the metacognitive processes. All cognitive models and processes are explained on the basis of the object-attribute-relation (OAR) model for internal information and knowledge representation and manipulation.

Chapter 17, “Unifying Rough Set Analysis and Formal Concept Analysis Based on a Logic Approach to Granular Computing” by Bing Zhou and Yiyu Yao, examines a logic approach to granular computing for combining rough set analysis and formal concept analysis. Following the classical interpretation of a concept as a pair consisting of an extension and an intension, the authors interpret a granule as a pair containing a set of objects and a logic formula describing the granule. The building blocks of granular structures are basic granules representing elementary concepts or pieces of knowledge. They are treated as atomic formulas of a logic language. Different types of granular structures can be constructed by using logic connectives. Within this logic framework, the chapter shows that rough set analysis and formal concept analysis can be interpreted uniformly by using the proposed logic language. The two theories share high-level similarities, but differ in their choices of definable granules and granular structures. Algorithms and evaluation measures can be designed uniformly for both theories.

Chapter 18, “On Foundations and Applications of the Paradigm of Granular Rough Computing” by Lech Polkowski and Maria Semeniuk-Polkowska, addresses the need for formal methods of granulation, and means for computing with granules, by applying methods of rough mereology. Rough mereology is an extension of mereology that takes as its primitive the notion of a part to a degree. Granules are formed as classes of objects that are, to a given degree, part of a given object. In addition to an exposition of this granulation mechanism, the authors also point to applications such as granular logics for approximate reasoning and classifiers built from granulated data sets.

Chapter 19, “Robust Independent Component Analysis for Cognitive Informatics” by N. Gadhok and W. Kinsner evaluates the outlier sensitivity of five independent component analysis (ICA) algorithms (FastICA, Extended Infomax, JADE, Radical, and β-divergence) using (a) the Amari separation performance index, (b) the optimum angle of rotation error, and (c) the contrast function difference in an outlier-contaminated mixture simulation. The Amari separation performance index has revealed a strong sensitivity of JADE and FastICA (using third- and fourth-order nonlinearities) to outliers. However, the two contrast measures demonstrated conclusively that β-divergence is the least outlier-sensitive algorithm, followed by Radical, FastICA (exponential and hyperbolic-tangent nonlinearities), Extended Infomax, JADE, and FastICA (third- and fourth-order nonlinearities) in an outlier-contaminated mixture of two uniformly distributed signals. The novelty of this chapter is the development of an unbiased optimization-landscape environment for assessing outlier sensitivity, as well as the optimum angle of rotation error and the contrast function difference as promising new measures for assessing the outlier sensitivity of ICA algorithms.

Chapter 20, “A Relative Fractal Dimension Spectrum for a Perceptual Complexity Measure” by W. Kinsner and R. Dansereau, presents a derivation of a new relative fractal dimension spectrum, DRq, to measure the dissimilarity between two finite probability distributions originating from various signals. This measure is an extension of the Kullback-Leibler (KL) distance and the Rényi fractal dimension spectrum, Dq. Like the KL distance, DRq determines the dissimilarity between two probability distributions X and Y of the same size, but does so at different scales, while the scalar KL distance is a single-scale measure. Like the Rényi fractal dimension spectrum, DRq is also a bounded vectorial measure obtained at different scales and for different moment orders, q. However, unlike Dq, all the elements of the new DRq become zero when X and Y are the same. Experimental results show that this objective measure is consistent with the subjective mean-opinion-score (MOS) when evaluating the perceptual quality of images reconstructed after compression. Thus, it could also be used in other areas of cognitive informatics.
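For background, the single-scale KL distance that DRq generalizes can be sketched in a few lines. The base-2 logarithm and the toy distributions below are illustrative choices, not taken from the chapter:

```python
import math

def kl_distance(p, q):
    """Scalar Kullback-Leibler distance between two discrete probability
    distributions of the same size, in bits (base-2 logarithms)."""
    assert len(p) == len(q)
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.5, 0.25]
print(kl_distance(p, q))  # 0.25 bits
print(kl_distance(p, p))  # 0.0: identical distributions give zero distance
```

Note that the KL distance is a single number; DRq, by contrast, is described in the chapter as a bounded vector of such dissimilarities computed across scales and moment orders q.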

Chapter 21, “3D Object Classification Based on Volumetric Parts” by Weiwei Xing, Weibin Liu, and Baozong Yuan, proposes a 3D object classification approach based on volumetric parts, in which a Superquadric-based Geon (SBG) description is implemented for representing the volumetric constituents of a 3D object. In this approach, 3D object classification is decomposed into a constrained search on an interpretation tree and the computation of a similarity measure. First, a set of integrated features and corresponding constraints is presented, which are used for defining efficient interpretation-tree search rules and evaluating model similarity. Then a similarity-measure computation algorithm is developed to evaluate the shape similarity between unknown object data and the stored models. With this classification approach, both whole and partial matching results with model shape-similarity ranks can be obtained; in particular, focus match can be achieved, in which different key parts can be labeled and all matched models with corresponding key parts can be obtained. Experiments demonstrate the validity and efficiency of the approach for 3D object classification.

Chapter 22, “Modeling Underwater Structures” by Michael Jenkin, Andrew Hogue, Andrew German, Sunbir Gill, Anna Topol, and Stephanie Wilson, investigates techniques and technologies to address the problem of the acquisition and representation of complex environments such as those found underwater. The underwater environment presents many challenges for robotic sensing including highly variable lighting and the presence of dynamic objects such as fish and suspended particulate matter. The dynamic six-degree-of-freedom nature of the environment presents further challenges due to unpredictable external forces such as current and surge. In order to address the complexities of the underwater environment, the authors have developed a stereo vision-inertial sensing device that has been successfully deployed to reconstruct complex 3-D structures in both the aquatic and terrestrial domains. The sensor combines 3-D information, obtained using stereo vision, with 3DOF inertial data to construct 3-D models of the environment. Semiautomatic tools have been developed to aid in the conversion of these representations into semantically relevant primitives suitable for later processing. Reconstruction and segmentation of underwater structures obtained with the sensor are presented.

Chapter 23, “A Novel Plausible Model for Visual Perception” by Zhiwei Shi, Zhongzhi Shi, and Hong Hu, proposes a novel plausible model, namely cellular Bayesian networks (CBNs), to model the process of visual perception. The new model takes advantage of both the low-level visual features of target objects, such as colors, textures, and shapes, and the interrelationships between known objects, and integrates them into a Bayesian framework, which possesses both a firm theoretical foundation and wide practical applications. The novel model successfully overcomes some weaknesses of the traditional Bayesian network (BN) that prevent BNs from being applied to large-scale cognitive problems. The experimental simulation also demonstrates that the CBN model outperforms a purely bottom-up strategy by 6% or more in the task of shape recognition. Finally, although the CBN model is designed for visual perception, it has great potential to be applied to other areas as well.

Chapter 24, “An Efficient and Automatic Iris Recognition System Using ICM Neural Network” by Guangzhu Xu, Yide Ma, and Zaifeng Zhang, presents an efficient and automatic iris recognition system using an Intersecting Cortical Model (ICM) neural network, which consists mainly of two parts. The first part is image preprocessing, which has three steps: first, the iris is located based on local areas; then the localized iris area is normalized into a rectangular region of fixed size; finally, iris image enhancement is applied. In the second part, the ICM neural network is used to generate iris codes, and the Hamming distance between two iris codes is calculated to measure their dissimilarity. To evaluate the performance of the proposed algorithm, the CASIA v1.0 iris image database is used, and the recognition results show that the system performs well.
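The matching step, comparing two binary iris codes by Hamming distance, is simple enough to sketch. The bit patterns below are illustrative; the chapter's codes are produced by the ICM network:

```python
def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two equal-length binary iris
    codes: the fraction of bit positions at which the codes differ."""
    assert len(code_a) == len(code_b)
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 0, 1, 0, 1, 1, 0]
print(hamming_distance(enrolled, probe))     # 0.25 (2 of 8 bits differ)
print(hamming_distance(enrolled, enrolled))  # 0.0
```

In such systems, a distance below a chosen threshold is taken to mean the two codes come from the same iris; the threshold value is a design parameter not specified here.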

Chapter 25, “Neural Networks for Language Independent Emotion Recognition in Speech” by Yongjin Wang, Muhammad Waqas Bhatti, and Ling Guan, introduces a neural network based approach for the identification of human affective state in the speech signal. A group of potential features is first identified and extracted to represent the characteristics of different emotions. To reduce the dimensionality of the feature space while increasing the discriminatory power of the features, the authors introduce a systematic feature selection approach that applies sequential forward selection (SFS) with a general regression neural network (GRNN), in conjunction with a consistency-based selection method. The selected parameters are employed as input to a modular neural network consisting of sub-networks, where each sub-network specializes in a particular emotion class. Compared with a standard neural network, this modular architecture allows a complex classification problem to be decomposed into small subtasks, so that each sub-network may be tuned to the characteristics of an individual emotion. The performance of the proposed system is evaluated for various subjects speaking different languages. The results show that the system gives quite satisfactory emotion detection performance, yet demonstrates a significant increase in versatility through its propensity for language independence.
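The greedy search at the heart of sequential forward selection can be sketched independently of the GRNN. The scoring function below is a stand-in for the GRNN-based evaluator, and the feature names are invented for illustration:

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: starting from the empty set, repeatedly add the
    feature whose inclusion maximizes the subset score, until k
    features have been chosen. Higher score is assumed better."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(tuple(selected + [f])))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy additive score standing in for a trained evaluator.
weights = {"pitch": 3, "energy": 2, "rate": 1}
toy_score = lambda subset: sum(weights[f] for f in subset)
print(sequential_forward_selection(["rate", "pitch", "energy"], toy_score, 2))
# ['pitch', 'energy']
```

Because each step only ever adds a feature, SFS evaluates far fewer subsets than an exhaustive search, at the cost of possibly missing feature combinations that only score well together.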

Chapter 26, “An Analysis of Internal Representations for Two Artificial Neural Networks that Classify Musical Chords” by Vanessa Yaremchuk, demonstrates that the internal structure of chord-classification networks can be interpreted. Interpretation reveals that the first network classified chords by first representing individual notes in terms of circles of major thirds and major seconds, and then combining these representations to position chords in a three-dimensional hidden-unit space. Despite using a different local representation of input chords, the second network shows a very strong tendency to adopt a transformation of the input similar to that observed in the first network. While there is a growing body of evidence concerning specialised neural processing of tones and chords (e.g., Peretz & Zatorre, 2005), this evidence is not yet sufficiently precise to indicate whether distributed representations based on tone circles are used by the brain. A search for examples of an ANN reorganising an input encoding scheme into this type of representation was not successful. This raises the question of whether circles of thirds and seconds are pertinent to human subjects’ representation of musical stimuli.

Chapter 27, “Foundation and Classification of Nonconventional Neural Units and Paradigm of Nonsynaptic Neural Interaction” by I. Bukovsky, J. Bila, M. M. Gupta, Z.-G. Hou, and N. Homma, introduces basic types of nonconventional neural units and focuses on their mathematical notation and classification: higher-order nonlinear neural units, time-delay dynamic neural units, and time-delay higher-order nonlinear neural units. Nonconventional neural units are classified first by the nonlinearity of the aggregating function, second by dynamic order, and third by the implementation of time delays within the neural unit. The authors then introduce a simplified parallel between the higher-order nonlinear aggregating function of higher-order neural units and both synaptic and nonsynaptic neural interaction, thereby drawing a new parallel between the mathematical notation of nonconventional neural units and the neural signal processing of biological neurons. Based on the mathematical notation of the inter-correlations of neural inputs in higher-order neural units, it is shown that a higher-order polynomial aggregating function of neural inputs can be naturally understood as a single-equation representation consisting of a synaptic neural operation plus a nonsynaptic neural operation. This unravels a simplified yet universal mathematical insight into the higher computational power of neurons that also conforms to biological neuronal morphology according to recent advances in the biomedical sciences.

Chapter 28, “Scaling Behavior of Maximal Repeat Distributions in Genomic Sequences – A Randomized Test Follow-Up Study,” by J.D. Wang and Ka-Lok Ng, conducts a follow-up analysis of maximal repeat distributions by plotting the relative frequency of maximal repeat patterns against the rank of their appearance. In almost all cases, the rank plots give better coefficient-of-determination values than the authors’ previous work, in which a frequency plot was used. The maximal repeat study is then repeated on a randomized version of the sequences; for the randomized sequences, rank-plot regression analysis does not support scaling behavior, indicating that the original findings are not an artifact.

*Discoveries and Breakthroughs in Cognitive Informatics and Natural Intelligence* examines innovative research in the emerging, multidisciplinary field of cognitive informatics. Researchers, practitioners and students can benefit from discussions of the connections between natural science and informatics that are investigated in this fundamental collection of cognitive informatics research.

Dr. Wang is the initiator of a number of cutting-edge research fields, such as cognitive informatics, denotational mathematics (concept algebra, process algebra, system algebra, semantic algebra, inference algebra, big data algebra, fuzzy truth algebra, fuzzy probability algebra, visual semantic algebra, and granular algebra), abstract intelligence (αI), mathematical models of the brain, cognitive computing, cognitive learning engines, cognitive knowledge base theory, and basic studies across contemporary disciplines of intelligence science, robotics, knowledge science, computer science, information science, brain science, system science, software science, data science, neuroinformatics, cognitive linguistics, and computational intelligence. He has published 400+ peer-reviewed papers and 29 books in the aforementioned transdisciplinary fields. He has presented 28 invited keynote speeches at international conferences and has served as general chair or program chair for more than 20 international conferences. He is the recipient of dozens of international awards for academic leadership, outstanding contributions, best papers, and teaching over the last three decades, and was the most popular scholar of top publications at the University of Calgary in 2014 and 2015 according to ResearchGate worldwide statistics.