An Embodied Logical Model for Cognition in Artificial Cognition Systems

Guilherme Bittencourt, Jerusa Marchi
Copyright: © 2007 | Pages: 37
ISBN13: 9781599041117 | ISBN10: 1599041111 | Softcover ISBN13: 9781599041124 | EISBN13: 9781599041131
DOI: 10.4018/978-1-59904-111-7.ch002
Cite Chapter

MLA

Bittencourt, Guilherme, and Jerusa Marchi. "An Embodied Logical Model for Cognition in Artificial Cognition Systems." Artificial Cognition Systems, edited by Angelo Loula, et al., IGI Global, 2007, pp. 27-63. https://doi.org/10.4018/978-1-59904-111-7.ch002

APA

Bittencourt, G. & Marchi, J. (2007). An Embodied Logical Model for Cognition in Artificial Cognition Systems. In A. Loula, R. Gudwin, & J. Queiroz (Eds.), Artificial Cognition Systems (pp. 27-63). IGI Global. https://doi.org/10.4018/978-1-59904-111-7.ch002

Chicago

Bittencourt, Guilherme, and Jerusa Marchi. "An Embodied Logical Model for Cognition in Artificial Cognition Systems." In Artificial Cognition Systems, edited by Angelo Loula, Ricardo Gudwin, and João Queiroz, 27-63. Hershey, PA: IGI Global, 2007. https://doi.org/10.4018/978-1-59904-111-7.ch002

Abstract

In this chapter we describe a cognitive model based on the Systemic approach and on Autopoiesis theory. The syntactical definition of the model consists of logical propositions, but the semantic definition includes, besides the usual truth-value assignments, what we call emotional flavors, which correspond to the state of the agent's body translated into cognitive terms. The combination of logical propositions and emotional flavors allows the agent to learn and memorize relevant propositions that can be used for reasoning. These propositions are represented in a specific format (prime implicants/implicates) enriched with annotations that explicitly store the internal relations among their literals. Based on this representation, a memory mechanism is described, and algorithms are presented that learn a proposition from the agent's experiences in the environment and that determine the degree of robustness of the propositions, given a partial assignment representing the environment state.
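The abstract's central representation, prime implicants, can be illustrated with a small sketch. This is not the authors' algorithm (the chapter's annotated representation and robustness computation are not reproduced here); it is a minimal brute-force enumeration of the prime implicants of a propositional formula, where a term is an implicant if every total extension of it satisfies the formula, and prime if no proper sub-term is also an implicant. The function and variable names are illustrative assumptions.

```python
from itertools import product

def prime_implicants(variables, formula):
    """Brute-force enumeration of the prime implicants of a Boolean formula.

    `variables` is a list of variable names; `formula` is a predicate over a
    dict mapping every variable to a bool. A term is a partial assignment
    (dict of var -> bool): it is an implicant if all total extensions satisfy
    the formula, and prime if no proper sub-term is also an implicant.
    """
    def is_implicant(term):
        # Check every completion of the free variables against the formula.
        free = [v for v in variables if v not in term]
        for values in product([False, True], repeat=len(free)):
            assignment = dict(term)
            assignment.update(zip(free, values))
            if not formula(assignment):
                return False
        return True

    # Enumerate all non-empty terms: each variable is True, False, or absent.
    implicants = []
    for choices in product([True, False, None], repeat=len(variables)):
        term = {v: c for v, c in zip(variables, choices) if c is not None}
        if term and is_implicant(term):
            implicants.append(term)

    # Keep only terms with no proper sub-term that is also an implicant.
    return [t for t in implicants
            if not any(s != t and s.items() <= t.items() for s in implicants)]
```

For example, for the formula (a AND b) OR (NOT a AND c), the sketch yields the three prime implicants {a, b}, {NOT a, c}, and the consensus term {b, c}. The exponential enumeration is only meant to make the definition concrete; practical computation uses dedicated methods such as Quine-McCluskey.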
