Review of Big Data Applications in Finance and Economics

Ulkem Basdas, M. Fevzi Esen
DOI: 10.4018/978-1-7998-3053-5.ch010

Abstract

Massively parallel processors and modern data management architectures enable companies to process and analyse complex, large-scale data, leading to more efficient operations and better decision making. Financial services companies in particular leverage big data to transform their business processes, and they focus on understanding the concepts of big data and related technologies. In this chapter, the authors focus on the scope of big data in finance and economics. They discuss the need for big data in the sector, driven by the digitalisation of services, the use of social media and new channels to reach customers, the demand for personalised services, and the continuous flow of vast amounts of data. They investigate the role of big data in the transformation of the financial and economic environment by reviewing previous studies on stock market trading and monitoring (real-time algorithmic trading, high-frequency trading), fraud detection, and risk analysis. They conclude that despite the rapid evolution of techniques, both the performance of those techniques and their areas of implementation are still open to improvement. Therefore, this review aims to encourage readers to broaden their vision of data mining applications.
Chapter Preview

Introduction

The world has experienced a revolution in information and communication technologies (ICTs) over the last couple of decades. Big data appeared as a revolutionary phenomenon that influenced decision-making processes. In the 1960s and 1970s, companies' first attempts at data discovery for business purposes proceeded through various stages, shaped by heuristic decision making, simple reporting, and statistical analysis (Figure 1). In the 1990s, most companies organized data collections in a table-based format with rows and columns and used relational or hierarchical databases to store their data. For cross-functional activities, fast query processing, and multiuser environments, they implemented extract, transform, and load (ETL) processes that migrate enterprise data from day-to-day transactions to data warehouses. The volume of data was measured in gigabytes at the very most.
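
As a minimal sketch of the ETL pattern just described (not taken from the chapter), the Python snippet below extracts rows from a hypothetical day-to-day transactions file, transforms them into per-customer aggregates, and loads the result into a SQLite table standing in for a data warehouse; all file, table, and column names are illustrative assumptions.

```python
# Minimal ETL sketch: extract daily transactions from a CSV export,
# transform them into per-customer totals, and load the aggregates
# into a "warehouse" table. File and column names are assumed.
import csv
import sqlite3
from collections import defaultdict

def extract(path):
    """Read raw transaction rows from the operational export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Aggregate transaction amounts per customer."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["customer_id"]] += float(row["amount"])
    return totals

def load(totals, db_path="warehouse.db"):
    """Write the aggregates into the analytical store."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS daily_totals "
                "(customer_id TEXT PRIMARY KEY, total REAL)")
    con.executemany("INSERT OR REPLACE INTO daily_totals VALUES (?, ?)",
                    totals.items())
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("transactions.csv")))
```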

Figure 1. Evolution of big data

In the early 2000s, companies started to focus on value creation through operational data warehouses that accumulate business transactions. The following decade brought different kinds of data sources, actively used as content management repositories and networked storage systems to manage enterprise information, and databases began to increase in volume and scale: terabyte-scale stores gave way to petabytes. Traditional data types were augmented by unstructured data, which typically has a text-heavy format without a fixed data model. To realize business benefits from processing high volumes of data, companies put emphasis on the speed at which new data is created. High-velocity data made it necessary to process large amounts of data at high speed, and companies became more analytical and data-driven as a result.

In the 2010s, the advent of wireless communication (i.e., WiFi, cellular, GPS, Bluetooth, RFID) over a wide area made it possible to connect devices with each other nearly anywhere. This resulted in hundreds of petabytes moving across networks per day. Companies have broadened their data management strategies to consider many kinds of data, such as multimedia files, e-mail messages, webpages, and other kinds of business documents. As a result, data has become a proprietary resource because of its value.

The second half of the 2010s ushered in the next frontier of innovation in big data. The interconnection of sensing and actuating devices enabled distributed file systems among connected users, thereby allowing data storage and sharing through a real-time communication network. Such innovation allowed many companies to allocate resources for moving, storing, and analyzing huge amounts of data on virtual infrastructures. According to NVP (2020), 39.7% of companies were investing in big data technologies in 2018, with an average spend of $50 million per company. The percentage increased in 2019 and was expected to rise sharply to 64.8% in 2020. However, it was also stated that the vast majority of companies are struggling with business adoption of big data and only 37.8% have created a data-driven environment (NVP, 2020).

Today, there is a dramatic increase in the amount of data generated, mined, and stored; the big data market, currently around $50 billion, is expected to reach $104.3 billion by 2026 (Fortune, 2019). Companies produce large amounts of raw data on a daily basis via IoT, smart devices, and cloud platforms. Further developments in digital transformation enable companies to adopt big data solutions that use Artificial Intelligence (AI) to organize and perform business tasks. According to Forrester (2019), big data is considered a dominant driver of competitive advantage, that is, of the ability of a business to outperform its rivals. Another big data market survey states that big data has enabled businesses to achieve their goals and that this impact is expected to grow immensely over the next years (IDC, 2020).

Key Terms in this Chapter

Volume: It refers to the size of data. Data attributes and the number of data points are the identifying factors of data volume.

Big Data: Big data is a broad concept that covers large, comprehensive, and always-available data, fed by various data sources such as information-sensing mobile devices, software logs, digital images and videos, GPS signals, and wireless communication networks, among others.

Velocity: It is defined as the frequency of data generation.

Risk Management: It involves the prediction of the bankruptcy of a firm (i.e., credit risk), the detection of fraud (i.e., operational risk), or the prediction of future stock returns (i.e., market risk).
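
As a minimal sketch of the credit-risk case named above (not a method from the chapter), the snippet below fits a logistic regression that scores probability of default; the dataset, feature names, and coefficients are synthetic, fabricated purely for illustration, and scikit-learn and NumPy are assumed to be available.

```python
# Credit-risk sketch: score probability of default from two synthetic
# borrower features. All data here is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Synthetic features: debt-to-income ratio and number of late payments.
debt_ratio = rng.uniform(0.0, 1.0, n)
late_payments = rng.poisson(1.0, n)
X = np.column_stack([debt_ratio, late_payments])
# Synthetic label: default is more likely with high debt and late payments.
default_prob = 1 / (1 + np.exp(-(4 * debt_ratio + 0.8 * late_payments - 3)))
y = rng.binomial(1, default_prob)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("test AUC:", round(roc_auc_score(y_test, scores), 3))
```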

Operational Risk: Operational risk refers to the financial losses arising from internal or external operational breakdowns. Internal factors can be inadequate internal processes, people, or systems, whereas external events include fraud, failure in controls, operational error, or natural disaster.

Veracity: Data veracity is a combination of four dimensions: completeness, consistency, correctness, and timeliness. These dimensions apply to both quantitative and qualitative data.

Variety: It is an expanded concept of data types and sources, typically defined as the combination of different types of structures within the data, such as semi-structured data (e.g., XML, JSON) and unstructured data (e.g., audio, images, text, click streams).
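
To illustrate variety concretely, the sketch below normalises the same hypothetical customer record arriving once as JSON and once as XML into a single flat structure; the record layout and field names are assumptions made only for this example.

```python
# Variety sketch: the same kind of customer record arrives as JSON and
# as XML; both are normalised into one flat dictionary so downstream
# analysis can treat them uniformly. Field names are assumed.
import json
import xml.etree.ElementTree as ET

json_record = '{"id": "C-100", "name": "Acme Ltd", "balance": 2500.0}'
xml_record = ('<customer><id>C-101</id><name>Beta GmbH</name>'
              '<balance>900.5</balance></customer>')

def from_json(text):
    data = json.loads(text)
    return {"id": data["id"], "name": data["name"],
            "balance": float(data["balance"])}

def from_xml(text):
    root = ET.fromstring(text)
    return {"id": root.findtext("id"), "name": root.findtext("name"),
            "balance": float(root.findtext("balance"))}

records = [from_json(json_record), from_xml(xml_record)]
print(records)
```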

Value: Data value refers to creating business value from new sources and kinds of data. The use of proper data and the ability to gather meaningful results provide great resources for decision makers.
