Artificial Intelligence and Marketing: Progressive or Disruptive Transformation? Review of the Literature

Copyright © 2023 | Pages: 24
DOI: 10.4018/978-1-6684-8958-1.ch005

Abstract

Recent advances in artificial intelligence have had a growing impact across disciplines, and marketing is no exception. This study analyzes the impact of artificial intelligence, first presenting its different techniques and then examining the effect of these techniques on marketing areas and activities. A systematic literature review was conducted. The following marketing areas and activities were identified: product recommendation, brand and trademark management, purchase decision forecasting, pricing, advertising management, chatbots, distribution and retail, engagement, and the planning process. Findings show that artificial intelligence is having a widespread impact across marketing areas and activities, and that this impact is in some cases disruptive and in others a source of gradual change.

Artificial Intelligence

Artificial intelligence is rapidly changing how firms operate, and marketing is no exception. In recent years, AI has become a powerful tool for enhancing the efficacy and efficiency of marketing efforts. With its capacity to analyze enormous volumes of data, AI can be applied to a wide range of activities (Gkikas & Theodoridis, 2019; Mariani et al., 2023; Ruiz-Real et al., 2021; Van Esch & Black, 2021). However, the term artificial intelligence covers a broad and heterogeneous set of disciplines (Buchanan, 2005; Frankish & Ramsey, 2014; Xu et al., 2020):

1. Machine Learning is a subset of AI that deals with algorithms that allow systems to improve from experience without being explicitly programmed.

2. Natural Language Processing (NLP) deals with enabling machines to understand, interpret, and generate human language.

3. Reinforcement Learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment and receiving rewards or penalties (a minimal sketch follows this list).

4. Generative Models can generate new data that is similar to previously seen data.

5. Deep Learning is a subfield of machine learning that uses deep neural networks to model complex patterns in data.

6. Neural Networks are a type of machine learning model inspired by the structure and function of the human brain.

7. Evolutionary Computation is a set of techniques for optimization and problem-solving that are inspired by biological evolution.

8. Genetic Algorithms are a type of optimization algorithm inspired by natural selection and genetics. They use techniques from evolutionary biology, such as mutation and crossover, to generate new solutions to a problem, and have been applied to a wide range of optimization problems, including scheduling, routing, resource allocation, parameter tuning, and design optimization.

9. Genetic Programming (GP) is a form of genetic algorithm that uses evolution to generate computer programs, rather than just numerical solutions to problems.

10. Fuzzy Logic is a mathematical framework for dealing with uncertainty and imprecision, which has been applied to many AI problems.

11. Bayesian Networks are probabilistic graphical models that represent relationships among variables and their probabilities.
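
To make the reinforcement learning idea concrete, the sketch below implements tabular Q-learning reduced to its simplest case, a one-state "bandit" problem. The ad-placement framing, slot count, and click probabilities are all invented for illustration and are not taken from the chapter:

```python
import random

# Hypothetical toy problem (invented for illustration): choose one of
# three ad slots; each slot pays off with a different probability.
SLOTS = 3
PAYOFF_PROB = [0.2, 0.5, 0.8]  # assumed click probability per slot

def pull(slot):
    """Environment: reward 1 if the ad is clicked, else 0."""
    return 1.0 if random.random() < PAYOFF_PROB[slot] else 0.0

# Tabular Q-learning with a single state, epsilon-greedy exploration.
q = [0.0] * SLOTS
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    # Explore with probability epsilon, otherwise exploit the best slot.
    if random.random() < epsilon:
        a = random.randrange(SLOTS)
    else:
        a = max(range(SLOTS), key=lambda s: q[s])
    r = pull(a)
    # Q-update: move the value estimate toward the observed reward.
    q[a] += alpha * (r - q[a])

print("learned values:", [round(v, 2) for v in q])  # approaches PAYOFF_PROB
```

The same update rule generalizes to multi-state problems by indexing the table by (state, action) pairs.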

Key Terms in this Chapter

Machine Learning: Machine learning is a subfield of artificial intelligence that focuses on the development of algorithms and statistical models that allow computers to learn from and make predictions or decisions based on data, without being explicitly programmed. In machine learning, an algorithm is trained on a large dataset and uses that data to make predictions or decisions based on new, unseen data. The goal of machine learning is to build models that can automatically improve their accuracy over time, by continuously learning from the data they process.
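
A minimal sketch of this idea, assuming scikit-learn is available (the chapter does not name a library): the model below learns decision rules from a tiny invented dataset rather than being explicitly programmed with them. All feature values and labels are made up for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented dataset: [ad_spend, site_visits] -> purchased (1) or not (0).
X = [[100, 5], [200, 9], [30, 1], [250, 12], [40, 2], [180, 8]]
y = [1, 1, 0, 1, 0, 1]

# The classifier infers its own rules from the labeled examples.
model = DecisionTreeClassifier().fit(X, y)

# Predictions on new, unseen data.
print(model.predict([[220, 10], [25, 1]]))  # e.g. [1 0]
```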

Bayesian Networks: Also known as Belief Networks or Bayesian Belief Networks, these are a type of probabilistic graphical model that represents relationships between variables and the uncertainty surrounding those relationships. The basic building block of a Bayesian network is a node, which represents a variable; the edges between nodes represent the relationships or dependencies between variables. The probability distribution of each variable is modeled using a set of conditional probabilities, which describe how the variable depends on its parent variables in the network. Bayesian networks can be used to perform tasks such as probabilistic inference, which involves computing the probability of a particular event given certain observations, and decision-making, which involves choosing the best course of action based on uncertain information. They are particularly useful in applications where the relationships between variables are complex and there is a high degree of uncertainty.
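
A minimal sketch of inference in a two-node network (Rain -> WetGrass), with all probabilities invented for illustration; real networks involve many nodes and are usually handled by a dedicated library:

```python
# Two-node Bayesian network: Rain -> WetGrass.
# All probabilities below are invented for illustration.
P_rain = 0.3
P_wet_given = {True: 0.9, False: 0.2}  # P(WetGrass | Rain)

# Inference by enumeration: P(Rain | WetGrass) via Bayes' rule.
p_wet = P_rain * P_wet_given[True] + (1 - P_rain) * P_wet_given[False]
p_rain_given_wet = P_rain * P_wet_given[True] / p_wet

print(f"P(WetGrass) = {p_wet:.3f}")                     # 0.410
print(f"P(Rain | WetGrass) = {p_rain_given_wet:.3f}")   # 0.659
```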

Neural Networks: A neural network is a type of artificial intelligence model that is inspired by the structure and function of the human brain. It is a collection of interconnected nodes or “neurons” that process and transmit information, allowing the network to learn from examples and make predictions or decisions based on input data. Neural networks can be trained to perform a variety of tasks, such as image recognition, speech recognition, and natural language processing.
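
A minimal sketch using only NumPy: a two-layer network trained with gradient descent on XOR, a classic task that a single neuron cannot solve. The architecture, learning rate, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# XOR inputs and targets.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```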

Deep Learning: Deep learning is a subfield of machine learning that focuses on using artificial neural networks with multiple layers (hence “deep”) to learn from and make predictions or decisions based on large amounts of complex and diverse data. Unlike traditional neural networks, deep learning algorithms use multiple layers of interconnected nodes to process and analyze data, allowing them to learn and make decisions at a higher level of abstraction.
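
A minimal sketch of such a multi-layer stack, assuming PyTorch is available (the chapter does not name a framework); the data is a randomly generated regression problem, invented purely for illustration:

```python
import torch
from torch import nn

# A small "deep" network: stacked layers, each building a more
# abstract representation of the input than the one before.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Invented regression data: the target is the sum of the features.
X = torch.randn(256, 10)
y = X.sum(dim=1, keepdim=True)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}")  # decreases toward zero
```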

Fuzzy Logic: This is a mathematical framework for representing and processing uncertainty and imprecision in information. It was developed as an alternative to classical (or “crisp”) logic, which only deals with binary values of true and false. In fuzzy logic, information is represented by degrees of truth rather than binary values, allowing for more nuanced and flexible representations of uncertainty: variables can take on values between 0 and 1, representing the degree to which they are true or false. These degrees of truth can be combined using fuzzy rules, which are statements that relate the values of multiple variables to each other. The output of a fuzzy system is a fuzzy set, which represents the degree of membership of the system's output in various possible categories. This output can then be defuzzified, or translated into a crisp value, using methods such as the center of gravity or the mean of maximum.
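
A minimal sketch of this pipeline (membership, rules, center-of-gravity defuzzification) in plain Python. The "demand"/"discount" framing, set shapes, and every number are invented for illustration:

```python
# Fuzzy logic sketch: degrees of truth in [0, 1] instead of true/false.

def triangular(x, a, b, c):
    """Membership rising from a, peaking at b, falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

demand = 55  # hypothetical demand score on a 0-100 scale
low = triangular(demand, 0, 0, 60)      # degree to which demand is "low"
high = triangular(demand, 40, 100, 100) # degree to which demand is "high"

# Invented rules: low demand -> large discount, high demand -> small one.
def out_membership(x):
    return max(min(low, triangular(x, 15, 25, 30)),   # "large" discount
               min(high, triangular(x, 0, 5, 10)))    # "small" discount

# Center-of-gravity defuzzification over candidate discounts 0%..30%.
xs = [i / 10 for i in range(301)]
num = sum(x * out_membership(x) for x in xs)
den = sum(out_membership(x) for x in xs)
print(f"defuzzified discount: {num / den:.1f}%")
```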

Genetic Algorithm: This is a type of optimization algorithm that is inspired by the process of natural selection and evolution. It is a search-based algorithm that uses principles of genetics and natural selection to find solutions to optimization problems. In a genetic algorithm, a population of potential solutions is generated, and each solution is evaluated based on a fitness function. The solutions with the highest fitness are used to generate new solutions through the process of selection, crossover, and mutation, in which parts of the high-fitness solutions are combined to create new, potentially better solutions. This process is repeated over several generations until a satisfactory solution is found or a stopping criterion is reached. Genetic algorithms are commonly used in a variety of fields, including engineering, finance, and gaming, to solve complex optimization problems that are difficult to solve with traditional optimization methods.
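
A minimal sketch of this loop (evaluate, select, crossover, mutate) on the classic OneMax toy problem, where fitness is simply the number of ones in a bitstring; all parameters are arbitrary illustrative choices:

```python
import random

LENGTH, POP, GENS = 30, 40, 60

def fitness(bits):
    """OneMax: count of ones; the optimum is the all-ones string."""
    return sum(bits)

def crossover(p1, p2):
    """Single-point crossover combining two parents."""
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.02):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for gen in range(GENS):
    # Selection: keep the fitter half of the population as parents.
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    # Crossover plus mutation produce the next generation.
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

best = max(pop, key=fitness)
print(f"best fitness: {fitness(best)}/{LENGTH}")
```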

Natural Language Processing: This is a subfield of artificial intelligence and computer science that focuses on the interaction between computers and humans using natural language. The goal of NLP is to develop algorithms and models that enable computers to understand, interpret, and generate human language. NLP has a wide range of applications, including text classification, sentiment analysis, machine translation, and question-answering.
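
A minimal sketch of one such application, sentiment classification, assuming scikit-learn is available; the six training reviews and their labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented sentiment dataset: 1 = positive, 0 = negative.
texts = ["great product, loved it", "terrible service, very slow",
         "excellent quality", "awful experience",
         "really happy with it", "slow and disappointing"]
labels = [1, 0, 1, 0, 1, 0]

# Bag-of-words features plus naive Bayes: a classic NLP baseline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["loved the quality", "very disappointing"]))  # e.g. [1 0]
```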

Support Vector Machine: An SVM is a supervised learning algorithm that uses a boundary, called a hyperplane, to separate the data into different classes. The hyperplane is chosen in such a way that it maximizes the margin, or the distance, between the closest data points from each class, called support vectors. SVMs can handle non-linearly separable data by transforming it into a higher-dimensional space where a linear boundary can be applied. SVMs are commonly used for text classification, image recognition, and bioinformatics, among other applications, and are known for their high accuracy, ability to handle large datasets, and robustness to outliers.
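
A minimal sketch assuming scikit-learn: the RBF kernel handles a toy dataset that is not linearly separable in two dimensions (one class surrounds the other). All points are invented for illustration:

```python
from sklearn.svm import SVC

# Invented 2-D points: an inner cluster (class 0) surrounded by
# an outer ring of points (class 1).
X = [[0, 0], [0.2, -0.1], [-0.1, 0.2], [0.1, 0.1],   # inner class
     [2, 0], [-2, 0], [0, 2], [0, -2], [1.5, 1.5]]   # outer class
y = [0, 0, 0, 0, 1, 1, 1, 1, 1]

# The RBF kernel implicitly maps points into a higher-dimensional
# space where a separating hyperplane exists.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)

print(clf.predict([[0.05, 0.0], [1.8, -0.3]]))        # e.g. [0 1]
print("support vectors:", len(clf.support_vectors_))
```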
