Machine Learning

Ambika P.
DOI: 10.4018/978-1-5225-5972-6.ch011

Abstract

Machine learning is a subfield of artificial intelligence that encompasses automatic computation to make predictions. The key difference between a traditional program and a machine-learning model is that the model learns from data and makes its own decisions. It is one of the fastest-growing areas of computing. The goal of this chapter is to explore the foundations of machine learning theory and the mathematical derivations that transform the theory into practical algorithms. The chapter also provides a comprehensive review of machine learning and its types, explains why machine learning is important in real-world applications, and surveys popular machine learning algorithms and their impact on fog computing. Finally, it suggests further research directions for machine learning algorithms.

Introduction

Machine learning (ML) was introduced in the late 1950s as a technique for artificial intelligence (AI) (Ayodele, 2010). Over time, its focus shifted toward algorithms that are computationally viable and robust. In the last decade, machine learning techniques have been used extensively for a wide range of tasks, including classification and regression, in application areas such as bioinformatics, speech recognition, spam detection, computer vision, and fraud detection. The algorithms and techniques involved come from many diverse fields, including statistics, mathematics, neuroscience, and computer science.

Classical Definitions of Machine Learning

  • The development of computer models for learning processes that provide solutions to the problem of knowledge acquisition and enhance the performance of developed systems (Duffy, 1997).

  • The adoption of computational methods for improving machine performance by detecting and describing consistencies and patterns in training data (Langley & Simon, 1995).

A learning algorithm takes training data as input, which represents experience, and outputs a model that performs some action. There are two general types of learning: supervised and unsupervised. Both require some interaction between the learner and the environment; the difference between them is illustrated by the following example. In tasks such as spam and anomaly detection, the model receives email messages as input, and the task is to assign each message the label spam or not spam. This type of learning, where the training data carries labels, is referred to as supervised; it can also be used to predict the missing portion of an unseen sample. Unsupervised learning, by contrast, processes unlabelled input data with the goal of grouping similar objects together. Clustering is the typical example of unsupervised learning.
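The contrast above can be sketched in code. The following is a minimal illustration (not taken from the chapter) using scikit-learn: the supervised part fits a naive Bayes spam classifier on a tiny hypothetical set of labelled messages, while the unsupervised part clusters the same messages without using the labels at all. The example emails and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cluster import KMeans

# Tiny hypothetical training set (invented for illustration).
emails = ["win cash prize now", "meeting at noon tomorrow",
          "claim your free prize", "lunch with the team"]
labels = ["spam", "not spam", "spam", "not spam"]

# Turn the raw text into word-count feature vectors.
vec = CountVectorizer()
X = vec.fit_transform(emails)

# --- Supervised: the learner sees inputs *paired with labels* ---
clf = MultinomialNB().fit(X, labels)
pred = clf.predict(vec.transform(["free cash prize"]))[0]

# --- Unsupervised: the learner sees inputs only, no labels ---
# KMeans groups the messages into two clusters by word overlap.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X.toarray())
cluster_ids = km.labels_
```

The supervised model can assign a label to an unseen message because it learned from labelled experience; the clustering model can only group messages that look alike, since it was never told what "spam" means.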
