In this chapter, we focus on the issue of understanding in various types of agents. Our main goal is to develop notions of meaning and understanding in neutral, non-anthropocentric terms that do not exclude preverbal living organisms and artificial systems by definition. By analyzing the evolutionary context of understanding in living organisms and the representation of meanings in several artificially built systems, we arrive at design principles for building "understanding" artificial agents and formulate necessary conditions for the presence of inherent meanings. Such meanings should be based on interactional couplings between agents and their environment, and should help the agents orient themselves in the environment and satisfy their goals. We explore mechanisms of action-based meaning construction, horizontal coordination, and vertical transmission of meanings, and exemplify them with computational models.
Theories of Meaning
Philosophers and linguists have studied the big question of "what does it mean to mean something" for many centuries. Nowadays, the study of meaning falls mainly within the realm of semantics and semiotics. In denotational semantics, linguistic meanings are identified with certain objects. Concerning the nature of these objects, a fundamental distinction is drawn between the realist and the cognitive (or conceptualist) approaches. In the realist approach, meanings are entities "out there" in the world. In the cognitive approach, meanings are mental entities "in the head". Gärdenfors (2000) characterizes cognitive semantics by the following six tenets: