Artificial Intelligences Are Subsets of Human Beings, Mainly in Fictions and Films: Challenges and Opportunities

Nandini Sen
DOI: 10.4018/978-1-7998-6985-6.ch028

Abstract

This chapter aims to create new knowledge on artificial intelligence (AI) ethics and related subjects by reviewing the ethical relationship between human beings and AI/robotics and by linking it to the moral fabric, or ethical issues, of AI as portrayed in fiction and film. It carefully analyses how a human being might come to love a robot and vice versa. Here, fiction and film are not just about technology but about feelings and the nature of the bond between AIs and the human race. Ordinary human beings first distrust AIs and then begin to like them; however, if an AI turns rogue, as happens in many fictions and films, it is taken down to avert the destruction of human beings. Scientists such as Turing have championed the idea that robots/AIs can have feelings. Fictional and cinematic AIs are developed to watch and comprehend humans keenly, and these actions come so close to empathy that they amount to consciousness and emotional intelligence.
Chapter Preview

Introduction

Following Turing (1950, p. 433), I could have proposed the question, “Can machines think?” Turing’s famous paper (1950) explains the meaning of the terms “machine” and “think.” I digress from his proposal a little and further ask whether machines can think ethically, or whether they can dream. The “machine” of the master mathematician has since been transformed into artificial intelligence, and much progress has been made in this field. The question now is how far the experts and scientists of artificial intelligence can add the value of ethics to their experiments and bring the field of AI within the purview of gender, race, and ethics for the benefit of human beings and the animal kingdom. In the imitation game, Turing (1950) sought to observe what changes would occur if a machine intervened in the process, and he asked whether this investigation is crucial for answering the question, “Can machines think?”

Long before Stephen Hawking and Elon Musk began to shape the debate on artificial intelligence, the science fiction writer Isaac Asimov set out, in 1942, his famous Three Laws of Robotics, a moral code to keep our machines in check. The three laws of robotics are: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm; second, a robot must obey orders given by human beings, except where such orders would conflict with the first law; and third, a robot must protect its own existence as long as such protection does not conflict with the first and the second law. Apparently these three laws are sufficient to construct a basis from which to develop a moral fabric inside an AI (Asimov, 1990; Britannica Publishing).

After WWII, several people started to work on intelligent machines. The British pioneer in this field, Alan Turing, delivered a lecture on AI in 1947. He was also perhaps the first to decide that AI was best studied by programming computers rather than by building machines. By the late 1950s, there were many researchers in AI, and most of them based their work on programming computers. Turing’s 1950 article “Computing Machinery and Intelligence” discussed the conditions for considering a machine to be intelligent. He argued that if a machine could successfully pretend to be human to a knowledgeable observer, then it certainly should be considered intelligent. This test would satisfy most people, but not all philosophers. The observer interacts with both the machine and a human; the human tries to persuade the observer that he or she is the human, and the machine tries to fool the observer (Anderson, 2017; McCarthy, 1990).
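Purely as an illustrative aside, and not as part of the chapter's argument, the structure of the imitation-game protocol described above can be sketched in a few lines of Python. Every name here (machine_reply, human_reply, imitation_game, the judge function) is a hypothetical placeholder introduced for this sketch, not a reference to any real system or to Turing's own formulation beyond its outline.

```python
import random

# A minimal sketch of the imitation game, under the assumption that both
# participants can be reduced to "reply to a question" functions.

def machine_reply(question: str) -> str:
    # Placeholder: a real candidate system would generate an answer here.
    return "I would rather not say."

def human_reply(question: str) -> str:
    # Placeholder: in Turing's setup this is a real person answering.
    return "Of course I can think about that."

def imitation_game(questions, judge) -> bool:
    """Return True if the judge mistakes the machine for the human."""
    # The judge converses with two hidden participants, A and B.
    # Which one is the machine is decided at random and kept hidden.
    machine_is_a = random.choice([True, False])
    transcript_a, transcript_b = [], []
    for q in questions:
        a = machine_reply(q) if machine_is_a else human_reply(q)
        b = human_reply(q) if machine_is_a else machine_reply(q)
        transcript_a.append((q, a))
        transcript_b.append((q, b))
    # The judge declares which participant is the human (True means A).
    judged_a_is_human = judge(transcript_a, transcript_b)
    # The machine "passes" if the judge picks the machine as the human.
    return judged_a_is_human == machine_is_a

if __name__ == "__main__":
    # Example judge: guesses at random, a stand-in for the expert observer.
    naive_judge = lambda ta, tb: random.choice([True, False])
    print(imitation_game(["Can you write me a sonnet?"], naive_judge))
```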

The philosopher John Searle said that the idea of a non-biological machine being intelligent is not rational. The philosopher Hubert Dreyfus said that AI is a kind of utopia. The computer scientist Joseph Weizenbaum said that the idea of AI was dirty, anti-human, and unethical. Various people have said that since artificial intelligence has not yet reached human level, it should not even be imagined. Meanwhile, the highly evolved field of robotics is producing a huge range of machines, from autonomous vacuum cleaners to military drones to entire factory production lines. At the same time, artificial intelligence and machine learning lie behind much of the software that helps us daily, whether we are searching the internet or receiving government services. In addition, many people say that AI is not a good idea; they even point to films like The Terminator and The Matrix, in which AIs mainly inflict destruction on, or exploit, humankind (Trent, 2012). These developments are rapidly leading to a time when robots of all kinds will become prevalent in almost all aspects of society, and human-robot interactions will increase significantly (Anderson, 2017; McCarthy, 1990).

Building on the arguments above, this chapter will examine a logical progression in the evolution of AI. It will present a set of balanced arguments, including the debates over the anticipated benefits and the apprehended evils that surround the question of whether AI is necessary to human civilisation. It proceeds through a few main sections: a literature review; a discussion of fictional and filmic AI characters; a comparison of real-life AI with fictional/filmic AI; ethnography; and analysis and conclusion, including future research.

Key Terms in this Chapter

Turing’s Test: In 1950 Alan Turing proposed a condition for considering a machine to be intelligent. He argued that if a machine could successfully convince a knowledgeable observer that it is a human, then we certainly should consider it intelligent.

Robots: Complex machines that carry out a program while detecting and overcoming problems and acting to fulfil human instructions. Karel Capek, who coined the term ‘robot’ for machines that replace human workers, likened robots to slaves: in his play Rossum’s Universal Robots (R.U.R.; 1920) he suggested that robots could replace human slave labour.

Asimov’s Three Laws: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence if such protection does not conflict with the First or Second Laws.
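As a rough illustration only, the strict precedence among the three laws can be sketched as an ordered rule check. The class, field names, and example action below are hypothetical assumptions introduced for this sketch; a real robot could not, of course, reliably predict harm to a human in this mechanical way.

```python
from dataclasses import dataclass

# A minimal sketch of the precedence ordering in Asimov's Three Laws.
# All fields and the evaluation logic are illustrative assumptions only.

@dataclass
class Action:
    description: str
    harms_human: bool = False           # would the action injure a human?
    inaction_harms_human: bool = False  # would refusing it let a human come to harm?
    ordered_by_human: bool = False      # was the action ordered by a human?
    endangers_robot: bool = False       # would the action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law dominates: never injure a human, never allow harm by inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders, already filtered by the First Law above.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when nothing above conflicts.
    return not action.endangers_robot

# Example: an order that would injure a human is refused despite the Second Law.
print(permitted(Action("push a person from a moving car",
                       harms_human=True, ordered_by_human=True)))  # False
```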

AI Ethics: AI ethics, which involves both philosophical and technical issues, aims to give an AI the awareness and intelligence to resolve conflicts, crises, and problems in ways that ultimately benefit humanity. The AI should be programmed so that the system prevents accidents and disasters as far as possible, and it must address issues such as human welfare, justice for the greatest number of people, climate change, and the defence of basic human rights.

Artificial Intelligence: The science and engineering of constructing intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable. Such systems can solve difficult and complex mathematical problems; for example, they can play chess at a very high level or drive a motor car on their own under human supervision.
