The philosophy of computer science is a young, healthy and productive research field, as the great number of academic events and publications held every year around the world attests. It also offers a rich interdisciplinary exchange of ideas, drawing on experts from philosophical and mathematical logic to epistemology, engineering, ethics and neuroscience. New problems are faced with new tools, theoretical as well as instrumental.
For all these reasons this volume is a special work: first, because it includes the ideas of some of the world's leading experts in the field; second, because these experts represent not only the established knowledge of the field but also its ongoing research force, working on the future of the philosophy of computer science (this is not contemporary scholastics!); third, because young researchers are opening new directions within current investigations; and fourth, because it includes some brave attempts to change our ideas about human and non-human relationships with the environment.
The book is divided into five sections that cover the principal topics in the field, from the richness of the idea of information (Section I) to its philosophical analysis (Section II), the ethical debate it raises (Section III), the specific nature of computer simulations (Section IV) and a final space for the crossroads between robotics, artificial intelligence, cognitive theories and philosophy (Section V).
Section I. Philosophy of Information
This initial section is devoted to the basic material of computer science: information. In fact, the idea of information is central to the sciences of the 20th as well as the 21st century, from biology (the DNA code) to chemistry, physics, mathematics and philosophy. Analyzing the idea of information from several perspectives offers the best possible introduction to the field.
In “How to Account for Information”, Luciano Floridi takes a next step in the philosophy of information, a field of which he is a seminal and leading expert. Floridi argues that semantic information is, strictly speaking, inherently truth-constituted and not a contingent truth-bearer: exactly like knowledge but unlike propositions or beliefs, for example, which are what they are independently of their truth values and may then, because of their truth-aptness, be further qualified alethically.
Next, “Information Carrying Relations: Transitive or Non-Transitive”, by Hilmi Demir, observes that a thorough analysis of the fundamental mathematical properties of information-carrying relations has not yet been undertaken. Such an analysis would shed light on some important controversies, and the overall aim of the chapter is to begin this process of elucidation.
The third chapter, “Biological Information as Natural Computation”, written by Gordana Dodig Crnkovic, proposes info-computational naturalism, a new approach that emerges from the synthesis of paninformationalism (the understanding of all physical structures as informational) and pancomputationalism, or natural computationalism (the understanding of the dynamics of physical structures as computation).
This section ends with another chapter about biological information (one of the key ideas of 21st-century biology): “On Biological Computing, Information and Molecular Networks”, by Walter Riofrio. It studies the special characteristics that the molecular networks constituting living systems might have. This will furthermore permit us to understand the intrinsic relationship between the handling of biological information and the emergence of something completely new, revealed in the natural computations carried out within living systems.
Section II. Philosophy of Computer Science
The second section contains chapters situated at the core of the philosophy of computer science: the construction of meaning and identity with computational tools, with references to mathematics, logic programming and philosophical analysis.
The first chapter of this section, “Programming Languages as Mathematical Theories”, is written by Ray Turner. He explores the claim that programming languages are (semantically) mathematical theories, which leads him to discuss the normative nature of semantics, the nature of mathematical theories, the role of theoretical computer science and the relationship between semantic theory and language design.
After this deep analysis of the nature of programming languages, Selmer Bringsjord, in “The Hypercomputational Argument for Substance Dualism”, considers (hyper)computational aspects of human cognition and offers a clearly written and argued case for dualism: since human persons hypercompute (i.e., they process information in ways beyond the reach of Turing machines), it follows that they are not physical, i.e., that substance dualism holds. Needless to say, objections to this argument are considered and rebutted.
“Identity in the Real World”, written by Matteo Casu and Luca Albergante, discusses the notion of identity, proposes the use of “description logics” to describe the properties of objects, and presents an approach to relativizing Leibniz's Law. This relativization is further developed in a semantic web context, where the utility of their approach is suggested.
Timothy Colburn and Gary Shute are the authors of the next chapter, “Knowledge, Truth, and Values in Computer Science”, in which they argue that knowledge acquisition in computer science fits models as diverse as those proposed by Piaget and Lakatos. However, contrary to natural science, the knowledge acquired by computer science is not knowledge of objective truth but of values.
After this analysis of values in computer science, David J. Saab and Uwe V. Riss contribute “Logic and Abstraction as Capabilities of the Mind: Reconceptualizations of Computational Approaches to the Mind”. They investigate the nature of abstraction in detail, its entwinement with logical thinking, and the general role it plays for the mind, concluding that computational minds must be able to imagine and volitionally blend abstractions as a way of recognizing gestalt contexts, and must be able to discern the validity of these blendings in ways that, in humans, arise from a sensus communis.
Finally, a larger group of co-researchers (Alison Pease, Andrew Ireland, Simon Colton, Ramin Ramezani, Alan Smaill, Maria Teresa Llano and Gudmund Grov) shows how to go about “Applying Lakatos-style reasoning to AI domains”. In drawing analogies between Lakatos’s theory and three AI domains, they identify areas of work corresponding to each heuristic and suggest extensions and further ways in which Lakatos’s philosophy can inform AI problem solving. Thus they show how we might begin to produce a philosophically inspired AI theory of combined reasoning.
Section III. Computer and Information Ethics
This third section offers a different approach to the analysis of computer science and information: the ethical one. Having discussed in the previous sections the essence of information and its computational meaning, we must now face the ethical dimensions of the field.
“Deconstructive Design as an Approach for Opening Trading Zones”, by Doris Allhutter and Roswitha Hofmann, presents a critical approach to software development that implements reflective competences in software engineering teams. Software development is a socio-technological process of negotiation that requires the mediation of different approaches. Research on the co-construction of society and technology, and on the social shaping of technological artefacts and processes, has highlighted social dimensions, such as gender and diversity discourses, that implicitly inform development practices. The authors introduce ‘deconstructive design’, a critical-design approach that uses deconstruction as a tool to disclose collective processes of meaning construction. For this purpose, the idea of value-sensitive design is linked to approaches of practice-based, situated and context-sensitive learning and to the concepts of ‘trading zones’ and ‘boundary objects’.
The next author, Luc Schneider, talks about “Scientific Authorship and E-commons”. His contribution assesses how the Web is changing the ways in which scientific knowledge is produced, distributed and evaluated, in particular how it is transforming the conventional conception of scientific authorship. Strategies and tools that may encourage a change of academic mentality in favour of a conception of scientific authorship modelled on the Open Source paradigm are discussed.
“Armchair Warfare ‘on Terrorism’. On Robots, Targeted Assassinations and Strategic Violations of International Law” is the interesting contribution of Jutta Weber. In the 21st century, militaries no longer compete for military dominance through specific superior weapon systems but through networking these systems via information and communication technologies. The ‘Revolution in Military Affairs’ (RMA) relies on network-centric warfare, ‘precision’ weaponry and ‘intelligent’ systems such as uninhabited, modular, globally connected robot systems. The question arises whether the new military philosophy of network-centric (armchair) warfare, targeted assassinations and robot technology works towards the weakening of international humanitarian law.
Closing this section, Pak-Hang Wong develops a study on “Information Technology, the Good and Modernity”. According to him, in Information and Computer Ethics (ICE), and in fact in the normative and evaluative research on Information Technology (IT) in general, analyses of the prudential values of IT are often neglected. He explains why these analyses are not taken seriously by researchers in ICE, and argues why they should not be neglected. He then outlines a framework for analysing and evaluating such prudential analyses, and applies it to an actual example, Nicholas Carr’s “Is Google Making Us Stupid?”. Finally, he concludes the chapter by outlining the limits of the proposed framework and identifying the further research that remains to be done.
Section IV. Simulating Reality?
The previous sections lead to a hot topic in computer science studies: the nature and epistemic value of scientific computer simulations. In certain areas of theoretical physics, for example, the only way to check some hypotheses is to use computer simulations.
In “Computing, Philosophy and Reality: a Novel Logical Approach”, Joseph Brenner argues that discussions of computational models and approaches should include explicit statements of their underlying worldview, given that reality includes both computational and non-computational domains. A new “Logic in Reality” (LIR) is proposed as best describing the dynamics of real, non-computable processes. A new interpretation of quantum superposition, supporting a concept of paraconsistent parallelism in quantum computing, and an appropriate ontological commitment for computational modeling are also discussed.
Michael Nicolaidis proposes a computational vision of the universe in his chapter “Computational Space, Time and Quantum Mechanics”. Its discussion of reality, computation, information and quantum systems continues the debate started in Brenner’s chapter, opening an inner debate for the reader about the limits of physical entities (real as well as simulated).
From a cognitive point of view, Jordi Vallverdú, in “Seeing for Knowing. The Thomas Effect and Computational Science”, studies computer visualization processes, especially simulations. We do not see through our instruments, but we see with them: we have an extended epistemology, one which embraces human and instrumental entities. We can make better science because we can deal better with scientific data. At the same time, the point is not that we ‘see’ better, but that we can only see because we design those cognitive interfaces. Computational simulations are the middleware of our mindware, acting as mediators between our instruments, brains, the world and our minds.
The last chapter of this section is devoted to the ontological debate about computer simulations. “Computer simulations and traditional experimentation: from a material point of view”, written by Juan Manuel Durán, revisits Francesco Guala’s chapter “Models, simulations, and experiments”. The main intention is to raise some reasonable doubts about the conception of ‘ontological account’ described in that work.
Section V. Intersections
This last section includes interdisciplinary research as well as theoretical approaches made from several perspectives. These chapters are at once a meeting point for specialists from different disciplines and a starting point for focusing our own (field) beliefs in a new manner.
Always provocative and able to translate philosophical ideas into surprising technological realities, Kevin Warwick asks “What is it like to be a robot?”. It is now possible to grow a biological brain within a robot body. As an outsider, it is exciting to consider what the brain is thinking about when it is interacting with the world at large, and what issues cause it to ponder during its break times. It appears that it will not be too long before we actually find out what it would really be like to be a robot. The chapter looks at the technology involved and investigates the possibilities on offer.
Antoni Diller takes a different approach to the analysis of robotics and AI, explaining “Why AI and Robotics are Going Nowhere Fast”. Considerable progress is being made in AI and robotics towards producing an android with human-like abilities. The work currently being done in mainstream laboratories cannot, unfortunately, succeed in making a machine that can interact meaningfully with people, because it does not take seriously the fact that an intelligent agent receives most of the information he or she needs to be a productive member of society by accepting other people’s assertions. After explaining the main reason for this, and surveying some of what has been done on understanding testimony in AI and philosophy by people working outside the mainstream, he presents a theory of testimony and investigates its implementability.
The next chapter represents the essence of interdisciplinary studies in computer science, ranging across cognition, philosophy of mind, logic and robotics: David Casacuberta, Saray Ayala and Jordi Vallverdú explain “Embodying cognition: a morphological perspective”. After several decades of success in different areas and numerous effective applications, algorithmic artificial intelligence has revealed its limitations, and we need to move from platform-free algorithms to embodied and embedded agents. In this chapter the authors adhere to a specific reading of the embodied view usually known as enactivism. In particular, they explore the computational role that morphology can play in artificial systems and illustrate their ideas with several Lego Mindstorms robots whose morphology is critical for their behaviour.
Last but not least, Klaus Mainzer closes the volume with “Challenges of Complex Systems in Cognitive and Complex Systems”. The article analyzes complex systems and the evolution of the embodied mind, complex systems and the innovation of embodied robotics, and finally discusses the challenges of handling a world of increasing complexity: large-scale networks have the same universal properties in evolution and technology; embodied robots are explained as dynamical systems; and embodied robotics aims at the development of cognitive and conscious robots.