Thinking Animals and Thinking Machines in Psychoanalysis and Beyond

Franco Scalzone (Italian Psychoanalytical Society, Italy) and Gemma Zontini (Italian Psychoanalytical Society, Italy)
DOI: 10.4018/978-1-4666-2077-3.ch003


In this chapter, the authors examine some similarities between computer science and psychoanalysis, and formulate some hypotheses by relating the conceptual status of connectionism to the energetic model of the psychic apparatus, and object-oriented programming (OOP) to object relations theory. The chapter also describes the relation between the functioning of mnemic systems and human temporalities as dynamic structures/processes which might be represented as complementary images of each other. The authors make some remarks on the theme of machines and people, the way in which humans relate to machines, especially “thinking machines,” and the fantasies these arouse. To do so, the chapter draws on Tausk’s classic “On the Origin of the ‘Influencing Machine’ in Schizophrenia” (1919/1933), as well as some of Freud’s writings.
Chapter Preview


“Computers are not appropriate models of brain, but they are the most powerful heuristic tool we have with which to try to understand the matter of the mind.” (Edelman, 1992, p. 194)

As Turkle (1988) points out, what we may call classic artificial intelligence (AI) is too often viewed only as computation, or as procedures for information processing; it is therefore mainly connected to cognitivism. This conceptual placement has driven many psychoanalysts away from AI, notwithstanding several attempts, such as Erdelyi’s (1985) and Bucci’s (1997), to build a bridge between psychoanalysis and cognitive psychology.

AI studies intelligence in an indirect way, trying to build machines capable of intelligent behaviour without paying too much attention to the peculiar features of human intelligence. Its method is the programming of computers so that they may exhibit some intelligent capabilities. We may say that there are essentially two ways to try to simulate intelligent human processes through computer modelling: either we start from higher-level symbolic functions and try to break them down into lower-level subfunctions (the top-down method), or we start by attempting to reproduce low-level functions, or even the hardware, and then work our way up to high-level symbolic functions (the bottom-up method).
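The two methods can be illustrated with a minimal Python sketch (ours, not the chapter's): the same trivial task, deciding whether a number is even, solved top-down by an explicitly programmed symbolic rule, and bottom-up by a single artificial neuron that learns the rule from labelled examples. The function names and the toy task are illustrative assumptions.

```python
# Top-down: start from the high-level symbolic function and decompose it
# into an explicitly programmed arithmetic sub-function.
def is_even_symbolic(n: int) -> bool:
    return n % 2 == 0

# Bottom-up: start from a low-level structure (one artificial neuron
# reading the lowest binary digit) and let it learn from labelled data.
def train_neuron(examples, epochs=20, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for n, label in examples:
            x = n & 1                           # lowest-order bit as input
            y = 1.0 if w * x + b > 0 else 0.0   # threshold activation
            err = label - y
            w += lr * err * x                   # perceptron update rule
            b += lr * err
    return w, b

examples = [(n, 1.0 if n % 2 == 0 else 0.0) for n in range(16)]
w, b = train_neuron(examples)

def is_even_learned(n: int) -> bool:
    return w * (n & 1) + b > 0

# The learned neuron agrees with the symbolic rule on unseen inputs.
assert all(is_even_symbolic(n) == is_even_learned(n) for n in range(100))
```

The contrast is the point of the sketch: in the first function the "knowledge" is written down by the programmer; in the second it emerges in the weights from exposure to examples, which is the sense in which emergent AI works bottom-up.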

Edelman (1992), with his theory of neural Darwinism, shows us how our brain is able, through neuronal group selection (TNGS), to operate in a bottom-up mode, that is to say, to self-learn and self-organise when faced with an unlabelled world. We can say that the first method of simulating cognitive processes is the one carried out by classic AI, which some would prefer to call mere cognitive simulation, and the second is the one pursued by emergent AI.

If we consider the reductionist position which equates mind and computer, we see that, for the time being, the only mental activities which can be simulated on a computer are the perceptual, cognitive, and logical ones. As is well known, the logical ones are the simplest to simulate through ad hoc algorithms, while the simulation of perceptual processes, for instance, is much harder.

We witness the apparent paradox that mental activities which are difficult for humans to carry out, such as complex mathematical calculations, are rapidly performed by computers. Conversely, psychological functions we deem simple (common sense), which we all perform without even realising it, pertain to subjectivity and cannot easily be expressed in a formalised way through an algorithm; they entail extreme processing complexity, resist theoretical explanation, and, at least for now, cannot be simulated on a computer.
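The asymmetry described above can be made concrete with a small, illustrative Python fragment (our example, not the chapter's): an exact calculation no human could perform mentally completes in a fraction of a second, while no comparably short program exists for a common-sense judgement.

```python
import math
import time

# Exact arithmetic far beyond human mental capacity: the full value of
# 1000 factorial, a number with 2568 decimal digits.
t0 = time.perf_counter()
digits = len(str(math.factorial(1000)))
elapsed = time.perf_counter() - t0

print(digits)          # 2568
print(elapsed < 1.0)   # True: near-instant on any modern machine

# By contrast, there is no short effective procedure one could write here
# for a "simple" common-sense judgement, e.g. whether a remark is tactless.
```

The computation is trivial for the machine precisely because it is fully formalised; common-sense judgements resist exactly that formalisation.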

AI has met remarkable difficulties whenever it has faced aleatory, ill-defined problems, as in simulating vision in order to recognise forms with irregular contours, or in analogical reasoning. Emotions, too, are indirect and secondary products of the functioning of the structure and of the way in which it is organised, and they cannot be reproduced through an effective procedure, through a programme. What a machine would lack is the qualitative character of conscious experience (qualia). Emotions can be found in no particular place and at no particular level; they too are distributed functions and emerge from the complexity of the structural organisation.
