Artificial Minds with Consciousness and Common sense Aspects


K.R. Shylaja, M.V. Vijayakumar, E. Vani Prasad, Darryl N. Davis
Copyright © 2020 | Pages: 20
DOI: 10.4018/978-1-7998-1754-3.ch069

Abstract

The research work presented in this article investigates and explains the conceptual mechanisms of consciousness and common-sense thinking in animates. These mechanisms are computationally simulated on artificial agents as strategic rules in order to analyze and compare agent performance in critical and dynamic environments. The level of consciousness in an agent is specified by its awareness of, and attention to, the specific parameters that affect its performance. Common sense is a set of beliefs accepted as true among a group of agents engaged in a common purpose, with or without self-experience. Common-sense agents are conscious agents endowed with a few common-sense assumptions. The simulated environment contains attackers that depend on the agents in a survival food chain. These attackers create a threat mental state in the agents that can affect their conscious and common-sense behaviors. The agents are built with COCOCA (Consciousness and Common sense Cognitive Architecture), a multi-layer cognitive architecture with five columns and six layers of cognitive processing for each percept of an agent. The conscious agents self-learn strategies for threat management and energy-level maintenance. The experimentation conducted in this research work demonstrates animate-level intelligence in the agents' problem-solving capabilities, decision making, and reasoning in critical situations.
Chapter Preview

Background

There are many existing cognitive architectures built to test and implement the cognitive capabilities of the human mind. The Emotion Machine architecture (EM-ONE) demonstrated human common-sense thinking in the Roboverse environment (Singh, 2005; Minsky, 2006). The Computational Model for Affect, Motivation and Learning (CAMAL) emulates emotions (Darryl & Suzanne, 2004; Darryl, 2001, 2002, 2010). The Society of Mind Cognitive Architecture (SMCA) investigated the concept of mind as a control system through the “Society of Agents” metaphor, using a fungus-eater testbed (Vijaykumar & Darryl, 2008; Vijaykumar, 2008). The CERA-CRANIUM architecture of Arrabales (2009) demonstrated different levels of consciousness in artificial agents. The research work presented in this article attempts to address the problem of modelling consciousness and common sense by drawing on ideas from AI and cognitive science. The cognitive capabilities of animals and humans are evident when they exhibit abilities such as learning, remembering, perceiving, thinking, decision making, and recognizing, together with visual, verbal, and language skills, in their usual interactions. Cognitive science proposes theories for building artificial minds based on natural mind architectures, called cognitive architectures (Anderson, 1993; 1996; Armstrong, 1968). These architectures help in modelling a range of human behaviors in machines so as to make them intelligent across a diverse set of tasks and domains. The main focus of any cognitive architecture is to represent, organize, utilize, and acquire knowledge while performing a task (Newell, 1972; 1990; 1992).

Theory of Conscious Agents

According to Russell (2003), an agent is “anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.” The mapping from percept sequences to actions is the agent function, whereas the internal process that chooses an action given the percept sequence is the agent program.
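The agent-function/agent-program distinction can be illustrated with a minimal sketch. The table-driven agent below is a standard textbook example, not the chapter's implementation; the class name, percepts, and actions are illustrative assumptions.

```python
# Sketch of Russell's abstraction: the agent FUNCTION maps a percept
# sequence to an action; the agent PROGRAM is code that realizes it,
# here via a simple lookup table over percept histories.
Percept = str
Action = str

class TableDrivenAgent:
    """Agent program realizing an agent function with a lookup table."""

    def __init__(self, table: dict, default: Action = "noop"):
        self.table = table          # maps percept-sequence tuples to actions
        self.percepts: list = []    # the percept sequence seen so far
        self.default = default

    def step(self, percept: Percept) -> Action:
        # Record the new percept, then look up the full history.
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), self.default)

agent = TableDrivenAgent({("food",): "eat", ("food", "threat"): "flee"})
print(agent.step("food"))    # eat
print(agent.step("threat"))  # flee
```

A table-driven program is only feasible for tiny percept spaces; the point of the abstraction is that more compact agent programs (rules, learning) realize the same agent function.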

Most human mental processes are unconscious, even though humans are considered highly conscious agents (Bargh & Morsella, 2008). Conscious agents are entities that exhibit intelligent behavior with properties such as autonomy, reactiveness, pro-activeness, and rationality. According to Donald D. Hoffman (2014), the mathematical definition of a conscious agent involves three mental processes: perception, decision making, and action. An agent in a conscious state can also have subjective experiences, wishes, beliefs, desires, and complex thoughts (Block, 1995; 2002; 2007; Shoemaker, 1996). It should be able to understand a relatively complex sequence of actions at an abstract level and respond to such situations (Franklin, 2009). A minimum prerequisite for conscious agents is social interaction with their peers in the environment.
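Hoffman's three-process view can be sketched as a perceive-decide-act cycle. This is a loose illustration of that cycle only (Hoffman's actual definition is probabilistic), and the class, states, and energy rule below are hypothetical, loosely echoing the chapter's threat and energy-maintenance scenario.

```python
# Illustrative perceive-decide-act cycle for a "conscious agent":
# perception maps world states to experiences, decision maps experiences
# (plus internal state) to actions, and action feeds back into the agent.
class ConsciousAgent:
    def __init__(self):
        self.energy = 10  # internal drive the agent tries to maintain

    def perceive(self, world_state: str) -> str:
        # Perception: map an external state to an internal experience.
        return "threat" if world_state == "attacker" else "safe"

    def decide(self, experience: str) -> str:
        # Decision: choose an action from experience and internal state.
        if experience == "threat":
            return "flee"
        return "forage" if self.energy < 15 else "rest"

    def act(self, action: str) -> None:
        # Action: acting changes internal state (and, implicitly, the world).
        self.energy += {"forage": 2, "rest": 1, "flee": -3}[action]

agent = ConsciousAgent()
for state in ["grass", "attacker", "grass"]:
    action = agent.decide(agent.perceive(state))
    agent.act(action)
    print(state, "->", action)  # grass -> forage, attacker -> flee, ...
```

Threading internal state (energy) through the cycle is what lets the same percept yield different actions over time, which is the behavioral hook the chapter's threat-management and energy-maintenance strategies rely on.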
