Quantitative Semantic Analysis and Comprehension by Cognitive Machine Learning

Yingxu Wang, Mehrdad Valipour, Omar A. Zatarain
DOI: 10.4018/978-1-7998-2460-2.ch035

Abstract

Knowledge learning is the sixth and most fundamental category of machine learning that mimics the brain. It is recognized that the semantic space of machine knowledge is a hierarchical concept network (HCN), which can be rigorously represented by formal concepts in concept algebra and semantic algebra. This paper presents theories and algorithms of hierarchical concept classification by quantitative semantic analysis based on machine learning. Semantic equivalence between formal concepts is rigorously measured by an Algorithm of Concept Equivalence Analysis (ACEA). The semantic hierarchy among formal concepts is quantitatively determined by an Algorithm of Relational Semantic Classification (ARSC). Experiments applying the ACEA and ARSC algorithms to a set of formal concepts have been successfully conducted, demonstrating a deep machine understanding of formal concepts and their quantitative relations in the hierarchical semantic space, achieved by machine learning beyond human empirical perspectives.
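The chapter defines ACEA and ARSC formally in concept algebra, which is not reproduced on this preview page. As a rough, non-authoritative illustration of the kind of quantitative analysis involved, the Python sketch below approximates a formal concept by its attribute set and estimates equivalence and hierarchical relations from attribute overlap and inclusion. All names (Concept, equivalence_degree, relational_class) and the threshold value are hypothetical and are not the authors' algorithms.

```python
# Minimal illustrative sketch (not the authors' ACEA/ARSC): a formal concept is
# approximated by its attribute set; semantic equivalence is estimated by
# attribute overlap, and the hierarchical relation by attribute inclusion.

from dataclasses import dataclass


@dataclass(frozen=True)
class Concept:
    name: str
    attributes: frozenset  # intensional attributes of the concept


def equivalence_degree(c1: Concept, c2: Concept) -> float:
    """Quantify semantic equivalence as the ratio of shared to total attributes."""
    if not c1.attributes and not c2.attributes:
        return 1.0
    shared = c1.attributes & c2.attributes
    total = c1.attributes | c2.attributes
    return len(shared) / len(total)


def relational_class(c1: Concept, c2: Concept, threshold: float = 0.75) -> str:
    """Classify the hierarchical relation of c1 to c2 by attribute inclusion."""
    if c1.attributes == c2.attributes:
        return "equivalent"
    if c1.attributes < c2.attributes:
        return "super-concept"   # c1 is more general than c2 (fewer attributes)
    if c1.attributes > c2.attributes:
        return "sub-concept"     # c1 is more specific than c2 (more attributes)
    # Partially overlapping or disjoint attribute sets
    return "related" if equivalence_degree(c1, c2) >= threshold else "independent"


if __name__ == "__main__":
    bird = Concept("bird", frozenset({"animal", "has_wings", "lays_eggs"}))
    penguin = Concept("penguin", frozenset({"animal", "has_wings", "lays_eggs", "flightless"}))
    print(equivalence_degree(bird, penguin))   # 0.75
    print(relational_class(penguin, bird))     # sub-concept
```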
Chapter Preview

1. Introduction

It is recognized that a fundamental challenge to machine learning is knowledge learning that mimics the brain (Wang, 2015a, 2016), beyond traditional object identification, cluster classification, pattern recognition, functional regression, and behavior acquisition (Russell & Norvig, 2010; Mehryar et al., 2012; Wang, 2015a). This leads to the emerging field of cognitive machine learning (Wang, 2010, 2015a, 2016a) on the basis of recent breakthroughs in denotational mathematics (Wang, 2002, 2012, 2013, 2015b) and mathematical engineering (Wang, 2015c, 2016a), such as concept algebra (Wang, 2015b), semantic algebra (Wang, 2013; Wang & Berwick, 2013), inference algebra (Wang, 2011), and Real-Time Process Algebra (RTPA) (Wang, 2002). Cognitive machine learning is underpinned by basic studies in cognitive informatics (Wang, 2003, 2007) and cognitive computing (Wang, 2009), such as the Layered Reference Model of the Brain (LRMB) (Wang et al., 2006), the Theory of Abstract Intelligence (αI) (Wang, 2007a), Mathematical Models of Human Memories (Wang, 2007b), Mathematical Models of Cognitive Computing (MMCC) (Wang, 2009), Mathematical Models of Cognitive Neuroinformatics (MMCN) (Wang, 2003), Mathematical Models of Cognitive Linguistics (MMCL) (Wang & Berwick, 2012, 2013), and the Cognitive Models of Brain Informatics (CMBI) (Wang, 2015a; Wang & Wang, 2006).

Concepts, as the basic carriers of semantics in human memory for knowledge representation, are studied in linguistics and cognitive psychology (Belohlavek & Klir, 1956; Chomsky, 1956, 2007; Harris, 2006; Sternberg, 2006; Lefton et al., 2008; Saeed, 2009; Machery, 2011; Wang & Berwick, 2012). In computational linguistics, lexis and semantics are studied in order to represent the relational composition of words in machine-interpretable lexical structures such as WordNet (Miller, 1990) and ConceptNet (Havasi et al., 2007). The cognitive properties of language expressions and knowledge are explored in cognitive science, computational linguistics, and cognitive computing (Harris, 2006; Sternberg, 2006; Machery, 2011; Wang, 2003; Wang & Berwick, 2013).
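To make the notion of a machine-interpretable concept hierarchy concrete, the short sketch below walks up the hypernym (is-a) chain of a WordNet sense, one of the lexical structures cited above. It is only an illustrative query, and it assumes the NLTK package and its WordNet corpus are installed (pip install nltk, then nltk.download('wordnet')).

```python
# Illustrative query against WordNet (Miller, 1990) via NLTK, showing the kind
# of hierarchical concept structure discussed in this chapter.

from nltk.corpus import wordnet as wn

# Take the first sense of "concept" and walk up its hypernym (is-a) chain.
sense = wn.synsets("concept")[0]
print(sense.name(), "-", sense.definition())

node = sense
while node.hypernyms():
    node = node.hypernyms()[0]
    print(" ->", node.name())
```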
