Minds and Machines: Limits to Simulations of Thought and Action

James H. Fetzer (University of Minnesota - Duluth, USA)
Copyright: © 2011 |Pages: 10
DOI: 10.4018/ijsss.2011010103

Abstract

Although distinctions can be drawn between relations of simulation, replication, and emulation, basic differences between digital machines and human beings render the stronger forms of anticipation impossible in principle. Because the use of signs in affecting behavior depends on a context of preexisting motives, beliefs, ethics, abilities, and capabilities, ontic and epistemic difficulties, arising from the complex interaction of the distinct variables and relevant conditions to which each person has been subjected in his or her unique life, make non-trivial explanations and predictions (ones not involving stereotyped or scripted behavior) theoretically impossible, even for the weakest forms of simulation. Indeed, even stereotypical behavior may not be predictable on similar grounds for real, historical human beings, not only in general but even in each single case. This study may thus be viewed as an essay about freedom of the will.

Introduction

Searle's “Chinese Room” argument, I take it, establishes that the behavioristic Turing Test criterion does not afford a standard that is theoretically sufficient for the purpose of discriminating between the causal properties of systems with and without mentality or, as he uses the term, intelligence (Searle, 1984). And that is because it does not distinguish between the input/output behavior of systems involving minds, the input/output behavior of systems using minds combined with look-up tables, and the input/output behavior of mindless look-up tables alone (Fetzer, 1995). Properly understood, therefore, Searle’s argument supports the necessity to differentiate between relations of simulation that display the same input/output behavior, of replication by simulations that are brought about by the same or by similar processes, and of emulation, where those replications are produced by systems that are composed of the same kind of stuff (Fetzer, 1990).

Since simulation is the weakest similarity relationship between animate and inanimate systems, the question I am going to address concerns whether an inanimate system, such as a robot, can simulate non-trivial behavior that is displayed by humans as the effects of their internal states of motives, beliefs, ethics, abilities and capabilities, relative to those systems' opportunities (the historical situations in which specific behaviors take place). I have in mind the actual behavior of real persons living their historical lives. One reason for not thinking so is that digital computers—classic von Neumann machines—are not the possessors of minds. At one time, I supposed that this was the key to my argument. But today I think that the ontic and epistemic problems that matter to this question apply across the board, even to other systems that have mentality.

Suppose, for example, that the ontic problems confronted here involve the complex causal interplay of values of those kinds, which might assume the form of deterministic causation (where the same cause yields the same effect, in every case without exception) or of indeterministic causation (where the same cause yields one or another effect within the same class of possible outcomes, without exception). But bear in mind that some classes of cases of deterministic causation are chaotic (which entails acute sensitivity to initial conditions), where the least change can bring about the most drastic alteration in effects, such as the use of a comma instead of a period in the program for Mariner I, which has been described as the most expensive grammatical mistake in history (Littlewood & Strigini, 1992), though which mistake occurred is disputed (Mariner 1, Wikipedia).
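The point about chaotic deterministic causation can be made concrete with a standard textbook example (not drawn from the article): the logistic map, a fully deterministic rule under which two trajectories starting from nearly identical initial conditions rapidly diverge. The function name and the choice of parameter r = 4.0 (the fully chaotic regime) are illustrative assumptions, not part of the original text.

```python
# Illustrative sketch: sensitive dependence on initial conditions,
# the hallmark of chaotic deterministic causation.
# The logistic map x -> r*x*(1-x) with r = 4.0 is deterministic
# (the same cause always yields the same effect), yet trajectories
# from nearly identical starting points diverge rapidly.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 for the given number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)  # a "comma-sized" perturbation

# Early on the two trajectories are indistinguishable; within a few
# dozen iterations the tiny initial difference has grown to macroscopic size.
print(abs(a[10] - b[10]), abs(a[-1] - b[-1]))
```

Nothing here is indeterministic: rerunning either trajectory reproduces it exactly. The epistemic difficulty lies in ever knowing an initial condition precisely enough for long-range prediction.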

Even though future connectionist machines, which employ networks of neuron-like nodes, might eventually be developed that possess the mental capabilities of human minds—which may be more subtle than it seems, since meanings and minds are dependent upon their bodies and behavioral abilities—the prospects for simulations in the mode of replication or of emulation will still tend to be unrealizable, in theory as well as in practice, for similar ontic and epistemic reasons (Fetzer, 1992, 1996). Although scripted or stereotypical behaviors—restaurant behavior, conventional exchanges, and ordinary discourse—initially appear to pose no problems for the simulation of input/output behavior, they are subject to parallel constraints, since the target may not follow the script, especially when affected by unconscious or subconscious factors of which he or she is unaware.
