Abstract
Recent advances in deep learning methodology have led to artificial intelligence (AI) performance that matches and even surpasses human levels in a growing number of complex tasks. There are many impressive examples of this development, such as image classification, sensitivity analysis, speech understanding, and strategic game playing. However, deep learning models offer little transparency for visualization, explanation, and interpretation, so their predictions give no insight into how they were reached, which can be a major disadvantage in many applications. This chapter discusses studies on the prediction of precious metals in finance, a domain that calls for explanatory models. Traditional AI and machine learning methods are insufficient for realizing such predictions. Explainable artificial intelligence (XAI), which enables reasonable decisions based on traceable inferences, offers many advantages here. In this chapter, the authors examine precious metal prediction with XAI by presenting a comprehensive literature review of the related studies.
Introduction
Artificial intelligence (AI) is becoming an indispensable part of human life day by day (Iansiti and Lakhani 2020). It is now embedded in everything from the image and face recognition systems found in all kinds of applications to predictive analytics, speech interfaces, autonomous operation, and hyper-personalized systems. AI is spreading across a wide range of sectors, such as education, construction, healthcare, manufacturing, law enforcement, and finance. AI-powered systems make decisions and predictions in driverless cars, in healthcare, and even in warfare.
Many of us need to know how decisions are made while artificial intelligence is at work. Many machine learning algorithms cannot be inspected to understand how and why a particular decision was reached (Iansiti and Lakhani 2020). This is especially true of the most popular approaches currently in use, such as deep neural networks. To rely on AI decisions, we as humans must be able to understand how those decisions are made. This lack of clarity hampers our ability to fully trust AI systems. Therefore, people expect such systems to produce transparent explanations and justifications of the decisions they make. This is the goal of explainable AI (XAI).
Explainable AI is an emerging field in computer science and machine learning that aims to clarify how black-box AI systems reach their decisions. It examines and tries to understand every step and model involved in decision making. XAI is expected to answer pressing questions from business owners, operators, users, and even experts, such as: Why did the AI system make a specific estimation or decision? Why did the AI system not do something else? When did the AI system succeed, and when did it fail? When does an AI system give enough confidence in its decisions that they can be trusted, and how can the AI system correct errors that occur in the decision-making process?
To date, research on making deep learning approaches to machine learning explainable is still at an early stage. However, it is hoped that adequate progress can be made to achieve transparency and openness without sacrificing robustness and accuracy. AI actions should be observable to a degree determined by the consequences those systems may produce. Systems with more critical, fatal, or significant consequences must meet stricter clarification and transparency requirements, so that everything can be examined when something goes wrong.
Not every system needs the same level of transparency. While it is not possible to standardize algorithms, or even XAI approaches, it may be possible to standardize levels of transparency according to requirements. For example, a book recommendation system needs little scrutiny, so a lower level of transparency may be acceptable. On the other hand, military systems, judicial systems, or autonomous vehicles may require far higher levels of clarity and transparency. Efforts to define such transparency levels have attempted to establish a common understanding of transparency, maintained through standards organizations, to sustain communication between users and engineers.
This study is a comprehensive literature review on the prediction of precious metals. Most of the surveyed studies estimate the value of gold using various AI approaches. Gold is in great demand because it is one of the most widely used and most valuable metals in industry. Various artificial intelligence algorithms have been applied to gold prediction, but because of their black-box nature it is not known how the predictions are made. In this chapter, the authors examine the prediction of precious metals based on artificial intelligence methods. The reviewed studies indicate that AI-based prediction of precious metals yields much more precise forecasts. The main aim is to determine whether the papers in the literature offer an explainable model that can convince investors to make reasonable investments and predict the value of precious metals more precisely. Therefore, a systematic mapping study is conducted to survey the studies published between 2017 and 2019.
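To make concrete what an "explainable" prediction could look like in this setting, the following is a minimal, self-contained Python sketch: a toy linear model for a gold price whose coefficients are directly readable, checked with permutation importance, a simple model-agnostic XAI technique that measures how much the error grows when one input feature is shuffled. All feature names, weights, and data values are invented for illustration; this does not reproduce the method of any specific surveyed study.

```python
# Hypothetical sketch: an interpretable gold-price model plus
# permutation importance as a model-agnostic explanation.
# Feature names, weights, and data are illustrative only.
import random

FEATURES = ["usd_index", "oil_price", "interest_rate"]

def predict_gold(x):
    """Toy linear model: each coefficient is directly interpretable."""
    weights = {"usd_index": -0.8, "oil_price": 0.3, "interest_rate": -0.5}
    bias = 1500.0
    return bias + sum(weights[f] * x[f] for f in FEATURES)

def permutation_importance(samples, targets, feature, trials=100, seed=0):
    """Average rise in squared error when one feature is shuffled."""
    rng = random.Random(seed)
    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)
    base = mse([predict_gold(s) for s in samples])
    errs = []
    for _ in range(trials):
        shuffled = [s[feature] for s in samples]
        rng.shuffle(shuffled)
        perturbed = [{**s, feature: v} for s, v in zip(samples, shuffled)]
        errs.append(mse([predict_gold(s) for s in perturbed]))
    return sum(errs) / trials - base

# Synthetic evaluation data (purely illustrative values).
samples = [{"usd_index": u, "oil_price": 60 + i, "interest_rate": 2.0}
           for i, u in enumerate([95, 97, 99, 101, 103])]
targets = [predict_gold(s) for s in samples]

for f in FEATURES:
    print(f, round(permutation_importance(samples, targets, f), 2))
```

In this toy setup the shuffled-feature error ranks `usd_index` above `oil_price`, while `interest_rate` (constant across samples) scores zero, giving the investor a direct answer to "which inputs drove the prediction" that a black-box model would not provide on its own.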
Key Terms in this Chapter
Explainable Artificial Intelligence (XAI): Artificial intelligence programmed to describe its purpose and decision-making process in a way that can be understood by the average person.
Artificial Intelligence (AI): The development of computer systems that can perform tasks requiring human intelligence and ability, such as visual perception, speech recognition, translation between languages, and voice recognition.
Precious Metals (PMs): Naturally occurring metallic elements with high economic value, used in industry and as ornaments.
Machine Learning: An application of artificial intelligence (AI) that gives systems the ability to learn and improve automatically from experience without being explicitly programmed.
Convolutional Neural Network (CNN): A neural network built around the convolution operation, a mathematical operation that extracts features from data and supports feature selection and classification tasks.