Potentially Relevant Digital Technologies

DOI: 10.4018/978-1-5225-5390-8.ch002

Abstract

In this chapter, Smith and Pourdehnad discuss two digital technologies, Artificial Intelligence (AI) and Crowdsourcing, that are not considered fundamental to current applications of the Fourth Industrial Revolution. The authors believe these technologies are nonetheless well worth consideration for some industries and applications, as explained in this chapter. AI has become a practical reality in recent years through tangible innovations such as the advent of self-driving cars and very human-like robots, and as a result of growing experience with machine learning. As familiarity with machine learning has grown, so have AI applications. AI is discussed in the first section together with Machine Learning; Crowdsourcing is discussed at length in the second main section.

Artificial Intelligence (AI)

Artificial Intelligence (AI) is intelligence demonstrated by machines that ‘perceive’ their environment and take actions that maximize their chances of successfully achieving their goal(s). The term “artificial intelligence” is typically applied when a machine mimics cognitive functions that humans associate with the human mind, such as learning or problem solving. AI has a long history as the basis for Hollywood science-fiction movies and futuristic fiction popularized by authors such as Isaac Asimov. However, AI has become a practical reality in recent years through tangible innovations such as the advent of self-driving cars (Globe & Mail, 2017) and very human-like robots such as Sophia (Henson, 2017a). The Internet has played an important role in these developments, in particular by providing the platform for AI-enabled services. Machine Learning is critical to AI; indeed, one could say “Machine Learning is AI”. Machine Learning was briefly discussed in the first section of Chapter 1 in relation to Cloud computing, and is discussed further here.

Machine learning involves algorithms that are fundamental to the viability of AI; an algorithm is a sequence of instructions used to solve a problem. Algorithms may be developed by AI programmers to instruct computers in new tasks, and they are the building blocks of the Fourth Industrial Revolution. Computer algorithms can organize enormous amounts of data into information and services, based on certain instructions and rules.

Instead of a computer programmer specifying every step, Machine Learning provides the computer with instructions that enable it to learn from the data itself, without new step-by-step instructions from the programmer. This means computers can be used for new, complicated tasks that could not feasibly be programmed by hand.

In other words, the basic process of machine learning is to provide training data to a learning algorithm. The learning algorithm then generates a new set of rules based on inferences drawn from the data; the resulting new algorithm is formally referred to as “the machine learning model”. By supplying different training data, the same learning algorithm can be used to generate different machine learning models. For example, the same type of learning algorithm could be used to teach a computer to translate many languages. This highlights the critical role of data: the more data available to train the algorithm, the more it learns. It also highlights the fact that many recent advances in AI have been due not to radical innovations in learning algorithms, but rather to the enormous amounts of data made available by the Cloud and the Internet. Inferring new rules from these data is a fundamental strength of machine learning.
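As a minimal sketch of this training-data-to-model flow, the following Python fragment (assuming the widely used scikit-learn library; the data is invented purely for illustration) fits the same type of learning algorithm to two different training sets, yielding two different machine learning models:

```python
# A minimal sketch of the flow described above: training data is supplied
# to a learning algorithm, which generates a machine learning model.
# scikit-learn is an assumption here; the tiny datasets are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# First (hypothetical) training set: [legs, has_tail] features.
animal_features = [[4, 1], [4, 0], [2, 1], [2, 0]]
animal_labels = ["dog", "cat", "bird", "human"]

# Second (hypothetical) training set: [weight_grams, is_round] features.
fruit_features = [[150, 1], [120, 0], [300, 1], [40, 0]]
fruit_labels = ["apple", "banana", "melon", "grape"]

# Training ("fitting") generates the model: a set of rules inferred from
# the data rather than programmed step by step.
animal_model = DecisionTreeClassifier().fit(animal_features, animal_labels)
print(animal_model.predict([[4, 1]]))   # e.g. ['dog']

# The same type of learning algorithm, given different training data,
# produces a different machine learning model.
fruit_model = DecisionTreeClassifier().fit(fruit_features, fruit_labels)
print(fruit_model.predict([[130, 0]]))  # e.g. ['banana']
```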

A machine learning model may apply a mix of different techniques but the methods for learning can typically be categorized into three general types (Internet Society, 2017):

  • Supervised Learning: The learning program (algorithm) receives labeled data and the form of output desired from its programmer. For example, pictures of a lion labelled “lion” will be supplied to assist the algorithm in identifying and building its rules for classifying pictures of lions.

  • Unsupervised Learning: Unlabelled data is passed to the learning algorithm, and the algorithm identifies the relevant patterns in the input data on its own. For example, if economy is a goal for an online delivery-routing website, the learning algorithm will identify the locations that are typically serviced together (see the sketch after this list).

  • Reinforcement Learning: The algorithm interacts with a dynamic environment that provides feedback in terms of “rewards” and “punishments”. For example, self-driving cars are rewarded if they remain accident free and stay on the road.
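To make the unsupervised case in the list above concrete, the following sketch (again assuming scikit-learn; the coordinates and the choice of k-means clustering are illustrative assumptions, not the authors' method) groups delivery locations that are typically serviced together, with no labels supplied:

```python
# A minimal unsupervised-learning sketch: cluster (hypothetical) delivery
# locations into service areas. No labels are given; the algorithm finds
# the grouping pattern in the input data on its own.
from sklearn.cluster import KMeans

# Hypothetical delivery locations as (x, y) map coordinates.
locations = [
    [1.0, 1.2], [0.8, 1.0], [1.1, 0.9],   # one neighbourhood
    [8.0, 8.3], [8.2, 7.9], [7.9, 8.1],   # another neighbourhood
]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(locations)
print(model.labels_)           # e.g. [0 0 0 1 1 1]: locations grouped per route
print(model.cluster_centers_)  # the centre of each service area
```

A real routing system would of course learn from far more data and richer features; the point of the sketch is only that the grouping emerges from the data rather than from labels or explicit rules.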

Machine learning is not new. The current growth in AI and machine learning is tied to developments in three important areas:

  • Cloud computing makes profuse data readily available;

  • Computers are more powerful; and

  • The Internet is reliable and widespread.
